
Category Archives: Technology

Discussion on any and all tech-related items: new tech, tech usage, etc.

 

yea, I am that guy

Ok, so as I stated at the tail end of my last post, here's your PowerShell to export data from Active Directory to an MSSQL database.  This code can easily be modified to pull whatever AD info you want and insert it.  I make use of the System.Data.SqlClient namespace, so it's worth mentioning that you can also use MySql.Data.MySqlClient or System.Data.OracleClient for your needs.  You can examine what I'm doing via the SqlClient namespace and easily correlate it back to your respective needs.

 


<#
Script: AD to DB
Author: Daniel Belcher
#>
#$ErrorActionPreference = "silentlycontinue"
$CHECK = Get-Module ActiveDirectory
IF(!$CHECK) {
	Write-Host -ForegroundColor Red `
	"Can't find the ActiveDirectory module, please insure it's installed.`n"
	Break}
ELSE {Import-Module ActiveDirectory}

#\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
<#Variable string values for customizing
	the DB and table to write to and the OU to look up against,
		as well as the time range in days#>
<#DataBase Name:#>
$DB = "Data"
<#SQL Server Name:#>
$SQLSRVR = "localhost\sqlexpress"
<#Table to Create:#>
$TABLE = "ADExport"	
<#Days old based on activity according to passwordlastset:#>
$DAYS = 180
<#Root OU to Search:#>
$ORGUNIT = "Root OU to Search"
<#Table Create String#>
$CREATE = "CREATE TABLE $TABLE (AD_Machine varchar(150) not null PRIMARY KEY, `
	OU varchar(100),DistinguishedName varchar(max),PasswordLastSet datetime, `
	OS varchar(100),ServicePack varchar(100))"
#/////////////////////////////////////////////////////////
<#Setting up object variables
	to be used for AD lookup#>
	$TIME = [datetime]::today.adddays(-$DAYS)
$LOOKUP = (Get-ADOrganizationalUnit -LDAPFilter "(Name=$ORGUNIT)")
	$ADARRAY = (Get-ADComputer -SearchBase $lookup.DistinguishedName -properties `
			name,passwordlastset,operatingsystem,operatingsystemservicepack `
			-Filter 'passwordlastset -ge $TIME')  
<#Connect and cleanup the AD table
	Connection remains open for writing#>
$SQLCON = New-Object System.Data.SqlClient.SqlConnection("Data Source=$SQLSRVR; `
			Initial Catalog=$DB;Integrated Security=SSPI")
	$SQLCON.open()
		$SQL = $SQLCON.CreateCommand() 
			$SQL.CommandText ="DROP TABLE $TABLE"
				$SQL.ExecuteNonQuery() > $null
		$SQL.CommandText = $CREATE
			$SQL.ExecuteNonQuery() > $null
<#Begin loop through the ADARRAY for
	Variables and begin inserting Rows to table#>
	$X = 0
ForEach($OBJECT in $ADARRAY){
	
	$NAME = $OBJECT.Name
		$OU = $OBJECT.DistinguishedName.ToString().Substring($OBJECT.Name.ToString().Length+4)
			$DN = $OBJECT.DistinguishedName
				$PWDLS = $OBJECT.PasswordLastSet
					$OS = $OBJECT.OperatingSystem
						$SP = $OBJECT.OperatingSystemServicePack
						
#\\\\Any Table Data to be written goes here:\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
	$INSERT = "INSERT $TABLE VALUES ('$NAME','$OU','$DN','$PWDLS','$OS','$SP')"
	$SQL.CommandText = $INSERT
#////////////////////////////////////////////////////////////////////////////////////////	
		$SQL.ExecuteNonQuery() > $null
$X = $X + 1			
}
"$X Records written to $TABLE in database $DB on server $SQLSRVR"
<#Cleanup variables and close connections#>
$SQLCON.Close()

 

Overview: 

We have 5 key variables to work from here:

  1. $DB – Name of Database to write to
  2. $SQLSRVR – Name/IP of the SQL server
  3. $TABLE – Table to build this data to
  4. $DAYS – Age of machines to pull from AD
  5. $ORGUNIT – The base OU to search through recursively

 

These of course are set to preference or need.

Now, what the script does is figure out the cutoff date, using the .NET datetime class offset by the number of days back you declare.

Then it looks up and stores the base OU we will search through, and builds an object containing the AD objects it finds.

Next it instantiates the SqlClient objects and opens a connection to the server and database we specified in our variables.  We drop the specified table, then recreate it.

Now the real work begins: we loop through the AD objects one by one, cherry-pick the specific properties we want from each object (an area where PowerShell excels, I might add), and declare them as variables to finally be written to our database table.

And for good measure we count each loop and store that, so we can see how many records were found and attempted, then clean up all the variables used during the script.
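Since the rest of the flow is plain ADO.NET, swapping database back ends mostly means swapping the connection object.  Here's a minimal sketch of what the connection portion might look like against MySQL instead, assuming you have MySQL Connector/NET installed; the assembly path, credentials, and database name below are placeholders, not values from my environment:

# Load the MySQL Connector/NET assembly (point this at wherever MySql.Data.dll lives on your system)
Add-Type -Path "C:\Path\To\MySql.Data.dll"
# Same pattern as the SqlClient version: build a connection, open it, create a command
$MYSQLCON = New-Object MySql.Data.MySqlClient.MySqlConnection("Server=localhost;Database=Data;Uid=youruser;Pwd=yourpassword")
$MYSQLCON.Open()
$SQL = $MYSQLCON.CreateCommand()
$SQL.CommandText = "DROP TABLE ADExport"
$SQL.ExecuteNonQuery() > $null
$MYSQLCON.Close()

The command and insert pattern stays the same; only the connection object changes (System.Data.OracleClient.OracleConnection follows the same pattern for Oracle), though T-SQL-specific types like varchar(max) would need adjusting for other engines.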

So in my last two posts I was discussing how to build a script for forcing a SCCM client inventory and how to build a custom collection for them to deploy to.

There is still one more problem: as I stated in the first post, you need to run the script against the proper interpreter.  As more and more people adopt Windows 7 x64 into their environments, this issue will become more and more prevalent.  Fortunately there is a simple solution.  Build, in your package source directory, multiple batch files that contain the following:

if exist %systemroot%\SysWOW64\cscript.exe goto 64
cscript "inventory.vbs" /full
exit
:64
%systemroot%\SysWOW64\cscript.exe "inventory.vbs" /full
exit

You will want to build one for each variant of the script you will be running.  This also eliminates the need to create doubles of every package>program and saves you from having to do any advertisement restrictions; it will all be managed at run time.  Modify lines 2 and 5 of your bat file for whatever you need to run and you are all set.

Finally, one more note of interest.

When building your advertisement, make sure the data is being downloaded before execution (default setting).  For good measure make sure it’s also enabled for unprotected distribution points under the advertisement schedule page.


As far as the command line is concerned it just needs to be:

batfilename.bat

That’s it for now. 


Also, I plan on posting a guide for building a Powershell script you can schedule to pull AD information to a SQL database which you can leverage for quick information gathering. Until then, have a good one.

Ok, so as I discussed in my previous post I built multiple collections that determine client inventory health.

So the end result is the ability to build collections based on machines that haven't reported to one or more inventory types for X number of days, and have the script deployed to them auto-magically with the corresponding inventory type.

This is one of the areas where SCCM (from the console level) becomes pretty restrictive.  In MSSQL you have access to a couple of functions that make determining time frames very simple: in this case DATEDIFF and GETDATE, which combined provide a nice, simple statement for determining a fixed time frame.

DATEDIFF(d, column-to-check, GETDATE()) >= 30

 

Ok, great.  So what's the problem?  Well, when building collections in SCCM you have to use WQL, which doesn't have such a robust function set.  Fortunately there is a way around this, and I'm going to give you a few SQL queries you can use to build some inventory health collections.

Start by building a new empty collection, and during the creation process make a standard query rule and save it.  Now right-click, go to the properties of your new collection, and copy its Collection ID.

Now open up SQL Management Studio and connect to your site server DB.  Make sure you are connected to the proper DB and run the following query:

select QueryExpression from v_CollectionRuleQuery
where CollectionID = '(pasted collection id)'

 

What you should see is the SQL being used on the back end to present to your console.  Now what you need to do is update this column to match whatever query you need.  Bear in mind, when you are redefining things inside this query, they need to match the WQL schema for the console to get the information it needs; I'll show you examples later.  I also want to point out that this is not supported by Microsoft, and if you don't properly structure your UPDATE statement you could end up hosing all your collections, so be safe.  Now here is how you do it:

Update v_CollectionRuleQuery
Set QueryExpression = 'whatever query statement you want'
where CollectionID = '(pasted collection id)'

 

Now when you update membership and refresh that collection, you should see results that match your query.  Ok, now here's the kicker: never modify this collection's query properties from inside SCCM, or the changes you made will be overwritten.

Now, about those queries I promised you.


--No Inventory for 30 days
SELECT SMS_R_System.ItemKey, SMS_R_System.DiscArchKey, SMS_R_System.Name0, 
   SMS_R_System.SMS_Unique_Identifier0,SMS_R_System.Resource_Domain_OR_Workgr0,
			SMS_R_System.Client0 
			FROM
				System_DISC AS SMS_R_System 
					JOIN softwareinventorystatus AS sw 
						ON 
					SMS_R_System.ItemKey = sw.ClientId 
						JOIN WorkstationStatus_DATA AS wks 
							ON
						wks.MachineID = sw.ClientId 
			WHERE
				DATEDIFF(dd,sw.LastUpdateDate,GETDATE()) > 30 
					AND
				DATEDIFF(dd,wks.LastHWScan,GETDATE()) > 30


--No Hinv 30 days
SELECT SMS_R_System.ItemKey, SMS_R_System.DiscArchKey, SMS_R_System.Name0, 
		SMS_R_System.SMS_Unique_Identifier0,
			SMS_R_System.Resource_Domain_OR_Workgr0, SMS_R_System.Client0 
		FROM System_DISC AS SMS_R_System 
		JOIN WorkstationStatus_DATA AS wks 
				ON 
			 wks.MachineID = sms_r_system.ItemKey
		WHERE 
			DATEDIFF(dd,wks.LastHWScan,GETDATE()) > 30


--No Sinv 30 days
SELECT SMS_R_System.ItemKey, SMS_R_System.DiscArchKey, SMS_R_System.Name0, 
		SMS_R_System.SMS_Unique_Identifier0, 
			SMS_R_System.Resource_Domain_OR_Workgr0, SMS_R_System.Client0 
		FROM 
		System_DISC AS SMS_R_System 
			JOIN softwareinventorystatus AS sw 
				ON
			SMS_R_System.ItemKey = sw.ClientId 
		WHERE 
			DATEDIFF(dd,sw.LastUpdateDate,GETDATE()) > 30


--Client with no DDR
SELECT SMS_R_System.ItemKey, SMS_R_System.DiscArchKey, SMS_R_System.Name0, 
		SMS_R_System.SMS_Unique_Identifier0,
			SMS_R_System.Resource_Domain_OR_Workgr0, SMS_R_System.Client0 
			FROM System_DISC AS SMS_R_System 
				JOIN softwareinventorystatus AS sw 
					ON
				SMS_R_System.ItemKey = sw.ClientId
					JOIN WorkstationStatus_DATA AS wks 
						ON 
					wks.MachineID = sw.ClientId 
			WHERE 
				DATEDIFF(dd,sw.LastUpdateDate,GETDATE()) > 5 
					AND 
				DATEDIFF(dd,wks.LastHWScan,getdate()) > 5 
					AND 
				SMS_R_System.Client0 = 0


If you need to specify a collection for exclusion (if you build all of these, you may want to exclude machines already in the no-inventory collection), append this to the end of your query:

						and
					SMS_R_System.ItemKey not in (select ResourceID from v_cm_res_coll_CollectionID)


 

And there you have it.  When writing your UPDATE line I would collapse these SQL queries to a single line and, obviously, remove the comment.  There is also a LimitToCollectionID column you might want to apply any specific restrictions to.

Hopefully these past two posts help a few people looking to take an automated, proactive approach to resolving inventory reporting issues on clients.  For anyone else:

  1. Hopefully you learned something about SQL?
  2. You got a nice refresher on the collection query view's name?

 

Have a good one.

Update: The code has been re-written from the previous post to make full use of WMI as the means of execution, and of a scripting dictionary to simplify reporting and controls.

Alright, so I’ve got a bit of a problem.

A large volume of data_loader kickbacks and resync requests.  The problem is that, for one reason or another, the clients' inventory data is being held up.  Sure, plenty of machines are only missing records for maybe 3, 4, or 5 days.  However, I've started to see machines that haven't gotten a record in for about 15 days or more.

This shall not stand!  Mind you I’m not going to fix these one by one either.  So what now?

Script!

So I begin to dive into the CCM namespace looking for a method to force a full inventory, which brings me to the InvAgt class.  Great!  But wait, I don't see any method for just forcing an inventory.

This is no longer the object used in the script, but leaving here for reference:

I'll save you the rest of the story and just get to what I found.  According to TechNet, the best way to "script" the solution is to delete the InventoryActionID from the InventoryActionStatus class in WMI and use the CPApplet COM automation class to invoke the rescan.

They were also nice enough to provide their own code, so it was much simpler to build the following script since I already had the core functionality objects defined for me.  Now I needed to weaponize the code for deployment…. Perhaps weaponize isn't the best term, but I needed it to deploy and do some work via SCCM.

So I decided to allow 4 arguments and 2 switches that would allow me to perform:

  1. Hardware Inventory (hinv)
  2. Software Inventory (sinv)
  3. Discovery Data (ddr)
  4. File Collection (file)

 

I also wanted to be able to fire this off in two ways: clear the old inventory instance ID information and force a full inventory, or just force an inventory.  I also wanted to be able to run it in a verbose mode to check that all actions were being completed as requested.  So I added the following switches:

  1. /full
  2. /debug

 

So the end result is the ability to build collections based on machines that haven't reported to one or more inventory types for X number of days, and have the script deployed to them auto-magically with the corresponding inventory type.

I also set up the script so someone could use it ad hoc on a machine by double-clicking.  The default behavior is to NOT delete the IDs but to initialize all 4 of the inventory calls.  I also provided Const declarations at the top so anyone can add or remove what they want in there (they will, however, need to update the dictionary key/item pairs and the Select Case statement to match any additional arguments).

For people that don't give a hoot about SCCM (there are plenty of you, shockingly enough), feel free to examine the way the loops and conditionals are used in this script.  They correlate back to the fundamentals I have covered previously in my stick to the script series.

  1. Stick to the script…
  2. Stick to the script #!/BIN/BASH…
  3. Stick to the script CreateObject(“Wscript.Shell”)…
  4. Stick to the script PS C:> PowerShell…

 

Finally, the code…..

'||============================||
'||Author Daniel Belcher       ||
'||Full HInv                   ||
'||Date 9/22/2011			   ||
'||Updated 11/7/12             ||
'||============================||
'==================================================================================
'Usage: For any preferred deployment method
'This script can:
'	1) Initiate Soft, Hard, Discover, File Collection on a client PC
'	2) Clear agent inv cache and perform full collection
'
'	cscript.exe inventory.vbs
'
'It's possible to target single inventory types or cherry-pick them as needed.
'Accepted arguments are:
'1) hinv (performs a hardware inventory)
'2) sinv (performs a software inventory)
'3) ddr  (performs a discovery data collection)
'4) file (performs a file collection)
'
'Accepted switches are:
'1) /full (blows away all inv cache for declared inventories and initiates request)
'2) /debug (runs the script verbose to check things are running)
'Using /full or /debug it's best to target an inventory type, but not required.
'
'All arguments and switches are case insensitive.
'
'|Objects, Variables, and Constants ***********************************************
'==================================================================================
'Constant strings for cleanup and initialization subroutines.
'==================================================================================
Option Explicit
Dim oWShell,oWmiLoc,oDict,Args,sArgs,nArgs
Dim Item,x,Full,DEBUGMSG
	Set Args 	= Wscript.Arguments
	Set	NArgs 	= Wscript.Arguments.Named
    Set oWShell = CreateObject("WScript.Shell")
    Set oWmiLoc = CreateObject("WbemScripting.SWbemLocator")
    Set oDict 	= CreateObject("Scripting.Dictionary")
'Add additional INV types here with IDs and update Case & dictionary key item pairs
Const sHinv = "Hardware Inventory Collection Cycle"	
	CONST Hinv = "{00000000-0000-0000-0000-000000000001}":oDict.Add sHinv, Hinv
Const sSinv = "Software Inventory Collection Cycle"
	CONST Sinv = "{00000000-0000-0000-0000-000000000002}":oDict.Add sSinv, Sinv
Const sDDR  = "Discovery Data Collection Cycle"
	CONST DDR  = "{00000000-0000-0000-0000-000000000003}":oDict.Add sDDR,  DDR
Const sFile = "Standard File Collection Cycle"
	Const File = "{00000000-0000-0000-0000-000000000010}":oDict.Add sFile, File
'|Main Run ***********************************************************************
'=================================================================================
'Checking for Named arguments
If NArgs.Exists("Full") Then
	Full = TRUE
		Else
	Full = FALSE
End If
		if NArgs.Exists("debug") Then
				DEBUGMSG = TRUE
					Else
				DEBUGMSG = FALSE
		End If
'Check for Unnamed arguments
If Args.Unnamed.Count > 0 Then
	If DEBUGMSG then wscript.echo "Unnamed arguments Found..."
	'Looping arguments looking for declared Inv types
	For Each item In Args
		Select Case lcase(item)
			Case "hinv"
				If Full Then
					DelInvActionID sHinv
						wscript.sleep 1500
				End If
					RunInvAction sHinv
			Case "sinv"
				If Full Then
					DelInvActionID sSinv
						wscript.sleep 1500
				End If
					RunInvAction sSinv
			Case "ddr"
				If Full Then
					DelInvActionID sDDR
						wscript.sleep 1500
				End If
					RunInvAction sDDR
			Case "file"
				If Full Then
					DelInvActionID sFile
						wscript.sleep 1500
				End If
					RunInvAction sFile
		End Select
	Next
else
	'If no Unnamed arguments then loop through appropriate arrays
	'based on /full or not
	If DEBUGMSG Then wscript.echo "No Unnamed Arguments found, " _
	&"checking for full switch..."
	If Full Then
		If DEBUGMSG Then wscript.echo "/Full switch used, " _
		&"clearing caches..."
		For Each item in oDict.Keys
			DelInvActionID item
				wscript.sleep 1500
		Next
	End If
		If DEBUGMSG Then wscript.echo "Initiating HINV, SINV, DDR, " _
		&"and File Collection now..."
	For Each item in oDict.Keys
		RunInvAction item
			wscript.sleep 1500
	Next
End If
Report
Wscript.Quit(0)
'SubRoutines *********************************************************************
'=================================================================================
'Sub to delete the InventoryActionID as requested
Sub DelInvActionID(guid)
	Dim oInvAgt 'declared locally so Option Explicit doesn't trip the WMI delete below
		On Error Resume Next
	Set oInvAgt = oWmiLoc.ConnectServer(,"root\ccm\invagt")
			oInvAgt.Delete "InventoryActionStatus.InventoryActionID='"& _
				oDict.Item(guid)&"'"
	If DEBUGMSG Then wscript.echo "Clearing "& _
		oDict.Item(guid)&" from InventoryActionStatus"
End Sub
'*********************************************************************************
'Sub to initialize Inventory Cycle requested
Sub RunInvAction(name)
		On Error Resume Next
Dim smsclient:Set smsclient=GetObject("winmgmts://./root/ccm:sms_client")
	smsclient.triggerschedule(oDict.Item(name))
		If DEBUGMSG Then wscript.echo "Running "&name
End Sub
'**********************************************************************************
'Sub to report script completion if /debug
Sub Report
         If DEBUGMSG Then wscript.echo "Complete" 'Set for popup on script complete
End Sub
'End ******************************************************************************
'==================================================================================

Alright, so last time I closed out my(sql) post I promised some more advanced queries etc.  Well let’s first start by playing around with some basic query syntax.

Think back to what I wrote about a database being a logical grouping of data, like a spreadsheet.  So surely we can pull specific data, right?  Of course; this is where SQL shines, and it couldn't be simpler.

select user_id0 from temp.test_users; 

 

Pretty simple, right?  Ok, well let's grab user IDs and their locations; that should also be simple enough.

select user_id0, location from temp.test_users;

 

Now how about just the users located in Birmingham, AL?

select user_id0, location from temp.test_users 
	where location like 'birmingham%';

 

Ok, let’s take a moment now and examine what we’ve just covered here in such rapid succession. These are simple select statements, and the primary data retrieval method in SQL.

select <column, names> from <database.table>
	[where <conditions of the search>]

 

You can use * for the column names to select them all, and using where is not required in your statements, but it is advised for a precise query.  Couldn't be easier, could it?  A cool little tidbit: you can use as in your query to rename your columns.  Using our previous select for user IDs and location, let's do it again and rename our columns User ID and Place:

select user_id0 as 'User ID', location as 'Place' 
	from temp.test_users 
where location like 'birmingham%';

 

What?  That's it?  Pretty nifty, huh?  Ok, let's turn the difficulty up a little bit now and join tables for a cross reference.  I built a bogus phonebook table with data similar to our test_users table.  In that table I have some phone numbers, but not the full names of the users.  I want to pull each user's full name and phone number; here's how I could do that:

select test.last_name, test.first_name, phone.number 
	from test_users as test 
left outer join phonebook as phone 
	on 
phone.userid = test.user_id0;

 

Ok, you can put the paper sack away, and breathe easy.  That’s not nearly as complex as it looks or sounds.  Let’s break it down piece by piece and examine it real fast.  First we will look at the output:

+-----------+------------+--------------+
| last_name | first_name | number       |
+-----------+------------+--------------+
| belcher   | daniel     | 205-111-1111 |
| rich      | derrek     | 813-111-1113 |
| george    | amanda     | 205-111-1112 |
+-----------+------------+--------------+

 

In our select statement we see the columns last_name, first_name, and number.  We saw earlier how to use as to define a name, and that's what we have done in this select statement: we've named the table test_users test and the phonebook table phone.

So, before defining the column name we are selecting, we have to identify FROM which table.  In this case; test.last_name, test.first_name, phone.number

Now, let’s examine the data I put into phonebook to better understand how this join is working.

+--------+---------+----------------+--------------+
| userid | last    | location       | number       |
+--------+---------+----------------+--------------+
| dbel   | belcher | birmingham, al | 205-111-1111 |
| ageo   | george  | birmingham, al | 205-111-1112 |
| drich  | rich    | tampa, fl      | 813-111-1113 |
+--------+---------+----------------+--------------+

 

Ok, as you can see, the userid column is identical to our user_id0 from test_users which makes it a prime anchor point for our join. So we perform a left outer join on test.user_id0 and phone.userid.

Now wasn’t that easy?

Finally, we are going to look at filtering with an IN list (the same spot where a full subquery would plug in).

select user_id0, last_name, first_name 
	from test_users 
where location in ('birmingham, al') 
order by user_id0;

 

This should prove very simple to read.  If the data stored under location matches 'birmingham, al', the row is pulled.  You could easily add more criteria to the IN list by adding it along with a comma to separate the values.  So: where location in ('birmingham, al','tampa, fl')

There are other ways to achieve similar outcomes using OR and AND, but this is a much cleaner and shorter way of doing it.

That’s it for now, perhaps I’ll have another sql post with even more interesting sql techniques in the future.

Alright, so lately at work I've been up to my eyeballs in SQL.  More specifically MSSQL, but SQL nonetheless.  So I wanted to take some time to discuss it and some basic stuff that even home users might find useful.  Considering this is a WordPress blog and it uses MySQL to store its post and configuration data, I'll use MySQL examples for my code snippets.  Now let's get started by creating our database.

create database temp;

 

Now let’s create a table for users.

create table temp.test_users
(User_ID0 char(50),
last_name char(50),
first_name char(50),
location char(50));

 

Let's stop and break this down for a second.  We just created our first database and named it temp.  Then, inside of that database, we created a table with 4 columns, each limited to 50 characters per entry.

So what is that exactly? 

Well, a database is just a logical grouping of data.  One easy way to think of a database is to think of a spreadsheet.  We’ll use excel as our example.

The database is your workbook, and a table is a worksheet.

So we’ve got a table now, lets insert some data to play with in there.

insert into temp.test_users
(User_id0,last_name,first_name,location)
VALUES
('dbel','Belcher','Daniel','Birmingham, AL');
insert into temp.test_users
(User_id0,last_name,first_name,location)
VALUES
('drich','Rich','Derrek','Tampa, FL');
insert into temp.test_users
(User_id0,last_name,first_name,location)
VALUES
('ageo','George','Amanda','Birmingham, AL');

 

Lets see how that data looks?

select * from temp.test_users;

 

+----------+-----------+------------+----------------+
| User_ID0 | last_name | first_name | location       |
+----------+-----------+------------+----------------+
| dbel     | belcher   | daniel     | birmingham, al |
| drich    | rich      | derrek     | tampa, fl      |
| ageo     | george    | amanda     | birmingham, al |
+----------+-----------+------------+----------------+

 

Hmm, it would be great if we had a date stamp along with this data, so we know when each row was created.

alter table temp.test_users
add date0 char(50);

 

Now, how are we going to update these entries?  I mean we could

delete from temp.test_users;

 

And recreate all those previous insertions with a new date column.  But that just isn't efficient, especially if we had an even larger table than this.  It would be nice if we could just append that date to the existing rows.  But wait, we can…

update temp.test_users
SET date0 = CURDATE()
where date0 is null;

 

Pretty neat huh?  Well, maybe, let’s see how our data looks now.

select * from temp.test_users;

 

+----------+-----------+------------+----------------+------------+
| User_ID0 | last_name | first_name | location       | date0      |
+----------+-----------+------------+----------------+------------+
| dbel     | belcher   | daniel     | birmingham, al | 2011-08-31 |
| drich    | rich      | derrek     | tampa, fl      | 2011-08-31 |
| ageo     | george    | amanda     | birmingham, al | 2011-08-31 |
+----------+-----------+------------+----------------+------------+

Alright, there go the basics of building, updating, and viewing a (my)SQL database and table.  Really simple, huh?  I'll try to post some slightly more advanced stuff in the next week, like joining tables and creating subqueries.

I guess until then you can RTFM (read the flippin manual).

On a more regular basis I'd like to keep up a stream of technical write-ups, gaming news, theological thoughts, and/or general "what's going on" with me and my family.  However, with a work trip to Houston last week and the generally slap-busy nature of my work since returning home, I've not had any time to collect some thoughts and formulate them into a blog post.  I want to hit some high points, and perhaps elaborate on them more in future posts.

High point #1 Samba DC

Ok, so people who have known me for any extended amount of time (from the age of 16 to 30) know that I'm a Linux fan.  My work and livelihood, mind you, thrive in a Microsoft world, but I will never sell Linux short, nor fail to marvel at the amazing things that a thriving community of passionate individuals can create.  I also maintain a Linux server out of my home to manage DNS, DHCP, VOIP (TeamSpeak), and file sharing (NFS, iSCSI, and SMB).  I will also, on occasion, bring up outward-facing game servers.  Just recently I decided to convert that server into a Samba DC for my primarily Windows 7 environment at home.

I run CentOS as my server distribution, which is downstream of RHEL.  I'm running Samba version 3.5.4; at the time of this writing 3.6 is the latest stable release, but it didn't offer enough improvements for me to go outside of my natively distributed yum version.

Also, aside from a few changes to the registry and local security policy that had to be made on the client side of the machines, the migration was fairly painless.

The first change resolves the issue of Windows 7 being able to find the domain for insertion, and the security policy change solves the issue of domain trust at login.  It's also wise to disable the machine's password reset to the DC to avoid potential trust relationship issues.  I'd not seen this issue myself, but until I see confirmation it's resolved (supposedly coming in Samba 4) I'll err on the side of caution.
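For anyone hunting those client-side tweaks, the registry values most commonly cited for joining Windows 7 to a Samba 3 (NT4-style) domain, plus the machine password change setting, look roughly like the sketch below.  I'm assuming these are the same changes referred to above, so treat it as a starting point rather than a definitive list:

# Commonly documented client-side changes for joining Windows 7 to an NT4-style (Samba 3) domain.
# Run from an elevated PowerShell prompt; a reboot is typically required afterwards.
$lanman = "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters"
Set-ItemProperty -Path $lanman -Name DomainCompatibilityMode -Value 1 -Type DWord
Set-ItemProperty -Path $lanman -Name DNSNameResolutionRequired -Value 0 -Type DWord
# Registry equivalent of the "Domain member: Disable machine account password changes" policy,
# which keeps the workstation from rotating its machine password against the DC.
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters" -Name DisablePasswordChange -Value 1 -Type DWord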

My next step will be to integrate OpenLDAP functionality into the DC, along with an Apache HTTP server.  I assume these will be fairly painless projects, but at the risk of breaking my current domain environment I'll need to wait until I have the time to deal with a potential LDAP migration failure.  I also don't have a strong enough list of pros for it, since this is just a home network.  Mind you, it's more sophisticated than the average home network; it just seems a bit over-engineered.  As for the Apache server, I really want to get back into some web development, so I'd like the internal server for development purposes….

 

service httpd start

Ok, so now I’m running an Apache server off my server as well.  Linux is so hard.

 


 

High point #2 Admin Studio 10

So I was in Houston last week.  I'm now "officially" trained to use Admin Studio 10 for package (MSI, App-V, XenApp, and ThinApp) development, repackaging, and migration.

So what does that mean?

Well as most of you know I work with a product from Microsoft called SCCM.  One of the primary features of SCCM is application deployment.

So what is application deployment?

Simply put, it’s installing applications to multiple machines over a network.

Ok, I think I see.  So why would you need to do package development to deploy packages?

Well, you don't have to.  One could feasibly shoehorn in an installer given by a vendor, but ideally you want to build out a standardized installer or load for your company.  For us that means I'll be building MSIs, MSTs, and App-V packages, as well as isolating application installs that might otherwise break the functionality of OTHER applications they share hard drive space with.

Wait, what?  Isolate, break, huh?

Almost all applications rely on libraries.  Think of them as a set of shared instructions that applications go to when asked what to do.  Well, in most cases these libraries are shared by multiple applications.  And sometimes one application wants a vanilla library, and another wants chocolate.  These apps will fight, and one of them will win and the other will lose.  By isolating them I can give them what they want so they don't break the system, or each other.

Our company will also leverage App-V packages, which are essentially virtualized installs of these applications: although they run locally on the machine, they are actually virtualized (or encapsulated) and kept separate from the actual operating system.  XenApp and ThinApp do the same thing.  I'm particularly excited about application virtualization; it can come with a bit of overhead, but it's nice and contained.

Ok, I stopped caring somewhere around chocolate and vanilla.

Yeah, I figured as much.  Either way, it is a tangible notch in my hard skill set and I'm glad that I was able to get it done.

 


 

 High point #3 Gospel in Life

What does a Gospel centered life look like?

What does it mean to be in the world but not of the world?

Is the Gospel as narrow minded to culture as people often proclaim it to be?

What does a Gospel centered community look like?

These are part of the current bible study I’m involved in with my brothers and sisters in Christ called Gospel in Life by Timothy Keller. It’s a great study that forces you to take a look at your heart, your life, and your community and compare it to what and how it is defined in the Gospel. I would recommend this study to anyone who is a believer. Even if the information isn’t new to you, as most of it hasn’t been for me, it’s still food for the soul. A reminder of the higher purpose we are called to as Christians.

Truthfully, I'd encourage non-believers as well to read this study, if for nothing else than to hold Christians accountable to the teachings that we claim to believe.

 


 

High Point #4 Ignoring my Family

I've taken way too long to blog this, and my wife has informed me that I should blog about how I've ignored my family, to blog.

When she’s right she’s right.  Thank God for her gentle reminders.

 


As a Systems Engineer/SCCM Administrator I spend a lot of time parsing through data and assisting support technicians in tracking down failing assets.  Now mind you, I have plenty of reports that give me the information I need to identify the machines, users, and techs responsible, etc., but what happens when I get a random list of employee names from a project manager who has zero access to user IDs or asset numbers for machines?  Well, I have to find that information, then spend time later pointing them to resources I've made available for them; but that's another topic….

Anyway, I face both problems: I'll receive a list of user IDs or user names and have to resolve one against the other.  Well, thanks to PowerShell I'm able to do so quickly and easily through profile functions.  I'll explain the benefits of profile functions after the code below:

 


Import-Module activedirectory

#-------------------------------------------------------------------- 
Function Get-UserName { 
[CmdletBinding()]

PARAM($USERID) 
Get-ADUser $USERID | select name 
} 
Set-Alias gun Get-UserName 
#-------------------------------------------------------------------- 
Function Get-Userid { 
[CmdletBinding()] 
PARAM([string]$NAME) 
$NAME = $NAME + "*" 
    Get-ADUser -Filter {Name -like $NAME} | select samaccountname,name 
} 
Set-Alias guid Get-Userid 
#--------------------------------------------------------------------

 How do I use profile functions?!?

PowerShell, much like the BASH shell in Unix/Linux, has a profile "script", so to speak, that runs at startup.  The global ones are found at:

  • %WinDir%\System32\WindowsPowerShell\v1.0\Profile.ps1
  • %WinDir%\System32\WindowsPowerShell\v1.0\Microsoft.PowerShell_Profile.ps1
  • %WinDir%\System32\WindowsPowerShell\v1.0\Microsoft.PowerShellISE_Profile.ps1

The same filename syntax is used for the user profile versions:

  • %UserProfile%\My Documents\WindowsPowerShell\Profile.ps1
  • %UserProfile%\My Documents\WindowsPowerShell\Microsoft.PowerShell_Profile.ps1
  • %UserProfile%\My Documents\WindowsPowerShell\Microsoft.PowerShellISE_Profile.ps1

See a pattern?  Simple enough, right?  None of these profiles exist by default, though; they must be created.  The names are fairly indicative of what they control, but here's a breakdown, with a quick way to create one sketched just after the list:

  • Profile.ps1
    • This governs startup of both the standard powershell, and the ISE.
  • Microsoft.PowerShell_Profile.ps1
    • This governs startup of the standard powershell console only.
  • Microsoft.PowerShellISE_Profile.ps1
    • This governs startup of the ISE powershell console only.
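A quick way to create the current-user console profile, for example, is to lean on the built-in $PROFILE variable (a minimal sketch; $PROFILE points at the Microsoft.PowerShell_Profile.ps1 path for the current user):

# Create the profile file if it doesn't exist yet, then open it to paste in the functions
if (!(Test-Path $PROFILE)) { New-Item -ItemType File -Path $PROFILE -Force }
notepad $PROFILE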

Simple enough, right?  Now, for the sake of simplicity, let's build a current-user version of Profile.ps1 and save the function code from above into it.  Make sure you've installed the ActiveDirectory module (available for Windows 7 through RSAT).  Now launch PowerShell and voilà, you should have the cmdlets:

  • Get-UserName
  • Get-UserID

and their aliases:

  • GUN
  • GUID

Ok, now what?

Here's the thing about profile functions: you can treat them like cmdlets now.  That also means that you can script against them.  Consider them a static variable for every PowerShell session that you have configured with this profile.

Pretty cool, huh?  One of the most powerful features of the shell is its configurability, and profile functions and aliases are the tip of that spear.

In the case of user name or user ID capture, I'm but a simple gc and foreach statement away from processing the list given to me, as sketched below.
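For example, a minimal sketch, assuming the names sit one per line in a hypothetical names.txt, might look like:

# Resolve a text file full of user names to their sAMAccountNames using the guid alias above
gc .\names.txt | ForEach-Object { guid $_ }

Swap guid for gun to go the other direction, from user IDs back to full names.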

I hope this helps broaden your practical understanding of profiles, and gets your creative juices flowing for building your own administrative tool kits.  Happy scripting.

I haven't written a scripting post in a while, but I've wanted to.  So, in keeping with the spirit of my stick to the script posts, let's look at something that is common among all scripting languages (even if the syntax isn't).

Let’s talk about strings…….

(image: a kitten playing with yarn)

Awwww, but no.  Not those kinds of strings.  In the case of scripting, I think the best way to think about a string is as text: what you are reading or able to read.  Strings aren't used mathematically (usually), but they can and will be a huge component in your scripting, especially when automating things around a desktop or server environment.

Oh really?  Yes, really.  General uses for strings in a script are:

  • User messages
  • Reporting or logging
  • Comparisons
  • Explicit paths
  • Application execution
  • Conditionals

Ok, so maybe that list doesn't look that impressive, but when you consider how much of that is done within a script, the importance of string values to scripting becomes obvious.  It's also important to recognize that certain scripting environments require you to define a value as a string so that it can be properly used.

(PowerShell, for instance, requires you to properly define a value's type to use the relevant functions… but I'm getting ahead of myself.)

So wait, there are more than strings in a script?  Yes; strings, integers, and Booleans are your standard value types.  Integers are numbers (math!) and Booleans are True or False.  So given those value types, perhaps it is a bit more obvious how frequently you will use string values.

So let's get into some sample code and evaluate some strings, shall we?


VBScript:

strTest = "Hey, I'm a string value"

wscript.echo strTest
'Shows the string value
wscript.echo strTest&", and I've been concatenated to the value."
'& operator joins values
wscript.echo lcase(strTest)
'Lower case
wscript.echo ucase(strTest)
'Upper case
wscript.echo strReverse(strTest)
'Reverses the string
wscript.echo len(strTest)
'Gives the total length of a string
wscript.echo mid(strTest,10,8)
'Returns a fixed number of characters from within the string
wscript.echo left(strTest,11)
'11 chars from the left
wscript.echo right(strTest,13)
'13 chars from the right
'----------------------------------------------------------------
wscript.echo inStr(strTest,"a")
'Returns position of string from left to right
wscript.echo inStrRev(strTest,"a")
'Returns position of string from right to left
'----------------------------------------------------------------
a = split(strTest)
'Splits strTest by its spaces
for each item in a
    wscript.echo item
   'echoes each dimension from the split array
next

wscript.echo a(3)&" "&a(4)

'echoes the split arrays dimensions 3 and 4
'---------------------------------------------------------------

strReturn=inputbox("Here, you try!")

if strReturn = "" then
   wscript.echo "Fine, don't play along"
else
    wscript.echo "So you said: "&trim(ucase(strReturn))&vbcrlf _
    &"Sheesh, no need to shout!"
end if
'---------------------------------------------------------------


Running the above script will give you a better understanding of what I'm about to explain.  I wanted to show some common functions in VBScript (the syntax differs in other languages, but these are universal functions you will use).  The above are common string manipulation tools.

 

Code explained… line by line

First we are defining our string to a variable, strTest.  Now, "in the wild" as it were, this string could be pulled from an object property, read from a file, the registry, user input, output from another application, etc.  It's best to assign a string to a variable, though, no matter the input method.  This of course is the most direct way to do it for our example.

Now we begin with the simplest string usage, output.

Now we raise the stakes a bit by joining an additional string value to our current strTest.  This action is known as concatenation.  It is a very common part of string usage and manipulation: building complex values/messages/logs from various predefined and/or dynamically pulled string values.

Our next two examples have to do with converting case between upper and lower.  This is fairly self-explanatory; in the interest of string comparisons, it's usually good practice (and often necessary) to force a case, especially if the comparison function is case sensitive.
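A quick PowerShell illustration of why case matters in comparisons (the default -eq operator ignores case, while the c-prefixed -ceq does not):

"Birmingham" -eq  "birmingham"               # True  - default comparison is case insensitive
"Birmingham" -ceq "birmingham"               # False - the c-prefixed operator is case sensitive
"BIRMINGHAM".ToLower() -ceq "birmingham"     # True  - forcing the case first makes a strict compare safe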

String reversal may not seem important initially, but it makes a huge difference when you are forced to chop strings up.  The ability to reverse a string can go a long way for string chopping, especially if you are dealing with filenames.

The length function is another that may seem arbitrary to some, but it allows for great flexibility as well in chopping up strings such as file names.  If you have a fixed number of characters to remove, it's sometimes simpler than splitting the string.  (So I wasn't completely honest about the math stuff and strings.)

Mid, Left, and Right.  These 3 can be used in conjunction with length to return a fixed number of characters from the left, right, or middle (as specified) of a string.  Here's a quick, easy example:

test="I am 18 chars long"
wscript.echo test
count=len(test)-4
wscript.echo right(test,count)

Very simply, we take the total length of test, subtract 4, and then return that many characters of the original string, counting from the right.  In this example we had a fixed value to subtract; be mindful that you could use the length values of multiple strings to achieve the same type of result.

Now, InStr and InStrRev, or "In String" and "In String Reverse".  These two functions make for great conditionals.  They, along with StrComp, are excellent for determining like strings and taking action, especially when parsing through files and directories looking for specific returns.

One of my favorites, especially in PowerShell: split.  Split takes a string, looks for a delimiter (a space by default), and breaks the string up into an array.  Why do I like it so much?  Put simply, it allows you to quickly whittle down long path names into a single filename.  It also allows you to quickly and efficiently reshape lists of data into manageable formats.  And last but not least, it can easily turn files like CSVs into an array for manipulation.
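As a quick PowerShell-flavored sketch of that path-whittling idea (the path and CSV line are made-up examples):

$path = "C:\temp\logs\archive\inventory.vbs"
$parts = $path.Split("\")
$parts[$parts.Length - 1]        # returns just the file name: inventory.vbs
"dbel,belcher,daniel".Split(",") # turns a CSV line into an array of fields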

Finally, user input.  This is pretty self-explanatory: prompt for input, receive and check the input, use the input.


In PowerShell, string functions are called like this:

$a="This is a string value"

$a.ToString()
$a.ToUpper()
$a.ToLower()
$a.Replace("a ","my ")
$a.Split(" ")
$a.Contains("string")
$a.StartsWith("t")
$a.EndsWith("e")
$a.Length
$a.CompareTo("this is a string value")
$a.Length.Equals(22)
$a.ToCharArray()
$a.PadLeft(30)
$a.PadRight(30)
$a.Trim()
$a.TrimEnd()
$a.TrimStart()

Given the previous examples in VBScript, you should be able to easily adapt your knowledge to using these in PowerShell.  The idea and purpose are still the same; again, only the syntax is different.


String manipulation inside Bash is, admittedly, a bit more convoluted, so I won't be touching on it in this post.  However, I'd highly recommend an online source like Mendel Cooper's guide.  Again, the methodology will still be much the same, but the syntax will differ.  The largest issue with Bash is the myriad of ways of performing string manipulation.

 

Anyway, I hope this has been informative for you.  Good luck, and happy scripting!