In my last article, I introduced the Practical PowerShell series. When working on PowerShell scripts, there might come a point where a set of instructions repeats code elsewhere in the script. It is also possible that you might want to incorporate code from elsewhere into your script so that you can easily call the code.
This description might remind you of cmdlets, which have names and, optionally, one or more parameters to control their operation. But what if you want the same thing for your own code, i.e. code with a high reusability factor? Welcome to the world of PowerShell functions. Before we get too deep, let’s define exactly what we are talking about.
Scripts
A script is a text file with a .ps1 extension containing PowerShell code. The code consists of cmdlets and (optionally) functions. You can call scripts in various ways:
- Using the call operator, the ampersand (& .\Process-Something.ps1). The code runs in a child scope. This means any definitions, such as variables or functions in the script, disappear when the script terminates. When you run PowerShell scripts interactively, you usually omit the ampersand, but when you want to run code stored in a variable or script block, you must use this invocation method, e.g., & { Get-ChildItem }
Make sure to understand the difference between using the ampersand at the start and at the end of a command: an ampersand at the end instructs PowerShell to run the code in a background job.
- Using dot-sourcing (e.g., . .\Helper-Functions.ps1). The code runs in the current scope of your PowerShell session. This means that any variables or functions you define in the script become available in the session. If these definitions existed before, they are simply overwritten.
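As a quick illustration, here is a minimal sketch of both invocation methods; the script names are the hypothetical ones used above:

# Call operator: runs the script in a child scope, its definitions vanish afterwards
& .\Process-Something.ps1

# Call operator with a script block stored in a variable
$Code= { Get-ChildItem }
& $Code

# Dot-sourcing: variables and functions from the script remain available in the session
. .\Helper-Functions.ps1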
The ability to run PowerShell scripts depends on the local machine’s current execution policy. This is a security measure to prevent malicious scripts from running. Scripts from Microsoft are generally signed, but community-sourced scripts downloaded from the internet usually are not. You may need to run Set-ExecutionPolicy Unrestricted before you can run scripts created by others, provided company policies do not prevent you from modifying the execution policy.
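For example, a hedged sketch of inspecting and, if policy allows, relaxing the execution policy for the current user only (RemoteSigned is a common, less permissive alternative to Unrestricted):

# Show the effective execution policy per scope
Get-ExecutionPolicy -List

# Allow local scripts and signed downloaded scripts for the current user only
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser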
Reusable Code
A function is a set of reusable code incorporated into a script. To identify the code, you must name it; you specify the name after the Function keyword. Following PowerShell conventions, function names should use the Verb-Noun naming convention. To avoid conflicts with existing commands, you can prefix the noun, e.g.
Function Get-MyReport { #reusable code }
If you create a function that redefines an existing command, the new definition takes precedence in the current session. The code’s output is returned to the caller, and you can output the result to the screen or assign it to a variable for further processing.
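For example, assuming the Get-MyReport function above returns objects, both options look like this (the CSV export is just an illustration):

# Output goes to the screen
Get-MyReport

# Or capture the output for further processing
$Report= Get-MyReport
$Report | Export-Csv -Path .\Report.csv -NoTypeInformation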
In addition to the function itself, you can also define parameters for the code contained in the function. The code within the function can then perform its task using information passed through the parameters, as the parameters become variables usable in the context of the function’s code.
An example explains the concept. The following is an imaginary function to fetch details of distribution groups, including the number of members of a distribution group when the MemberCount parameter is specified. The MemberCount property contains the number of members, or remains empty when the MemberCount switch is not specified.
A basic definition of such a function might be something like the following:
Function Get-DistributionGroupInfo( $Identity, $MemberCount) {
    Get-DistributionGroup $Identity | Select-Object Identity, PrimarySmtpAddress,
        @{n='MemberCount'; e={ If( $MemberCount) { (Get-DistributionGroupMember -Identity $_ | Measure-Object).Count }}}
}
This code is acceptable when drafting a script or working on a proof of concept. However, issues might become apparent when using the function later, or when somebody else who is less familiar with the use case takes responsibility for the code.
Among the issues with this function are:
- The distribution group passed in the variable $Identity can be unspecified ($null). This can lead to unintended side effects, as many Get cmdlets happily return all objects when you specify $null as a value. Take the following code:
Function Process-Mailbox( $Id) {
    Get-Mailbox -Identity $Id | Set-Mailbox -HiddenFromAddressListsEnabled $True
}
Can you guess what happens if you do not pass the Id parameter, or when the Id is empty? All mailboxes are returned, and Set-Mailbox will happily hide every mailbox from the address lists. That is probably not something you intended.
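One defensive option, before we look at PowerShell’s own parameter validation, is a simple guard at the top of the function; a minimal sketch:

Function Process-Mailbox( $Id) {
    # Stop early when no identity is provided, instead of operating on all mailboxes
    If ([string]::IsNullOrEmpty( $Id)) {
        Throw 'Process-Mailbox: No mailbox identity specified'
    }
    Get-Mailbox -Identity $Id | Set-Mailbox -HiddenFromAddressListsEnabled $True
}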
- In the example above, Identity and MemberCount can be anything; they do not need to be a distribution group or a switch, respectively. You might add code that checks if $Identity is a distribution group and if $MemberCount is a Boolean ($true or $false), but that would require additional code. The code might become quite complex if it must process multiple parameters.
Luckily, PowerShell has several mechanisms to assist you with defining parameter requirements. Let us look at the following example of an advanced function:
Function Get-DistributionGroupInfo {
    [CmdletBinding()]
    Param(
        [Parameter(Position= 0, Mandatory= $true, ValueFromPipeline= $true, ValueFromPipelineByPropertyName= $true,
            HelpMessage= 'Please provide a Distribution Group')]
        [String]$Identity,
        [Parameter(HelpMessage= 'Output member count')]
        [Switch]$MemberCount
    )
    Process {
        Write-Verbose ('Fetching Distribution Group {0}' -f $Identity)
        Get-DistributionGroup $Identity | Select-Object Identity, PrimarySmtpAddress,
            @{n='MemberCount'; e={ If( $MemberCount) { (Get-DistributionGroupMember -Identity $_ | Measure-Object).Count }}}
    }
}
This advanced function contains the following enhancements:
- [CmdletBinding()] before the parameter definitions tells PowerShell that the function supports common parameters. Examples of common parameters are Verbose and Debug. You can then include code in your function to support them. For example, if you pass -Verbose and your function contains Write-Verbose commands, verbose output will be displayed. When you omit -Verbose, that output is not displayed.
- Position=0 in the first parameter specification instructs PowerShell that the first unnamed parameter passed to Get-DistributionGroupInfo is treated as the Identity. So, the following two commands are the same:
Get-DistributionGroupInfo -Identity 'DG-X'
Get-DistributionGroupInfo 'DG-X'
Additional unnamed parameters can be specified as Position=1, etc.
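A small hypothetical sketch of two positional parameters:

Function Get-Something {
    Param(
        [Parameter(Position= 0)][String]$Name,
        [Parameter(Position= 1)][String]$Location
    )
    Write-Output ('Name={0} Location={1}' -f $Name, $Location)
}

# These calls are equivalent
Get-Something -Name 'DG-X' -Location 'Amsterdam'
Get-Something 'DG-X' 'Amsterdam'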
- Mandatory=$true tells PowerShell that this parameter is mandatory. When a user omits the Identity when calling the function, PowerShell will prompt for it. Parameters can be made optional by setting Mandatory to $false or by omitting the setting altogether.
- We want to be able to pass the Identity when the function is called in a pipeline. You can enable pipeline usage for this parameter by specifying ValueFromPipeline. You can use the property of passed objects by specifying ValueFromPipelineByPropertyName. Calling the function in a pipeline can then look like this:
Get-DistributionGroup -Identity 'DG-X' | Get-DistributionGroupInfo
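To illustrate ValueFromPipelineByPropertyName, a hedged sketch where the Identity is bound from the Identity property of custom objects:

# Objects with an Identity property bind through ValueFromPipelineByPropertyName
$Groups= [PSCustomObject]@{ Identity= 'DG-X' }, [PSCustomObject]@{ Identity= 'DG-Y' }
$Groups | Get-DistributionGroupInfo -MemberCount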
- HelpMessage defines help information for the parameter. When you omit a mandatory parameter and PowerShell prompts you for input, entering !? displays this help text. It is also displayed when you use:
Get-Help Get-DistributionGroupInfo -Full
I do not know anyone who uses !?, but you can if you want.
- After specifying the constraints, you define the parameter itself. You do this by giving it a name, or in this case an identity. The variable with this name holds the value passed when the function is called, making it available within the function’s scope. You can optionally define the type of object the parameter will accept. In this case, we specify [String], which equals [System.String]; PowerShell has type accelerators (short aliases) like this for common built-in types.
The nice thing about strict typing of parameters is that when you pass a different type of object, such as an integer, PowerShell will throw an error, mentioning what was passed and what was expected. For example, the basic function below accepts a single parameter A, which needs to be an integer (int) type. Passing a number versus passing a string will result in the following output:
Function Test {
    param( [int]$A )
    Write-Output ($A)
}

❯ test -A 123
123
❯ test -A 'string'
Test: Cannot process argument transformation on parameter 'A'. Cannot convert value "string" to type "System.Int32". Error: "The input string 'string' was not in a correct format."
I recommend using strict typing with parameters whenever possible. Typing helps with troubleshooting usage and also helps to document the code, as explained later. One thing to consider is that values might get converted through interpretation. For example, if you pass a parameter value 123, and a string is expected, PowerShell will happily convert this to a string representation, ‘123’.
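A small sketch of that implicit conversion, using a hypothetical variation of the Test function with a [string] parameter:

Function Test2 {
    param( [string]$A )
    # The parameter type determines what $A becomes inside the function
    Write-Output ($A.GetType().Name, $A)
}

❯ Test2 -A 123
String
123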
- The second parameter (note the comma after Identity) is a Switch named MemberCount. Since this parameter is not mandatory and pipeline usage is unnecessary, these are not specified in the definition. The nice thing about switches is that you use them just by mentioning them: specifying -MemberCount sets the MemberCount variable to $true; when you do not, it is $false. You cannot use -MemberCount $true, as PowerShell will interpret $true as a value for the next positional parameter since MemberCount is a switch. If needed, for example when $true or $false is stored in a variable, you can set the switch explicitly by specifying <Switch>:<value>, e.g. -MemberCount:$false. You might already be using this syntax to avoid having to confirm certain commands, e.g. Set-Mailbox … -Confirm:$False
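A short sketch of the different ways to use the switch, assuming the advanced function above is loaded:

# Switch present: $MemberCount is $true inside the function
Get-DistributionGroupInfo -Identity 'DG-X' -MemberCount

# Switch absent: $MemberCount is $false
Get-DistributionGroupInfo -Identity 'DG-X'

# Explicitly set the switch, for example from a variable
$WantCount= $false
Get-DistributionGroupInfo -Identity 'DG-X' -MemberCount:$WantCount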
- By putting the code performing the actual task in a Process script block {}, we make the function work for objects passed through the pipeline. If we omit this and leave the code as-is, it will not support pipelining, and the code will only execute once for the last object received through the pipeline. Note that the current object in the pipeline is available through the automatic variable $_, if needed, within the Process script block.
When we put the code for this function in a script file, for example, MyDemo.ps1, we can make it available in our PowerShell session by dot-sourcing the file. We can then call the function (provided the Exchange Online Management module is loaded and connected) and inspect its definition by calling Get-Help, which also shows the help information from the parameter definitions.
PS❯ . .\MyDemo.ps1
PS❯ Get-DistributionGroupInfo -Identity MyDG -MemberCount -Verbose
VERBOSE: Fetching Distribution Group MyDG

Identity PrimarySmtpAddress  MemberCount
-------- ------------------  -----------
MyDG     MyDG@contoso.com              2

PS❯ Get-DistributionGroup | Get-DistributionGroupInfo -MemberCount

Identity PrimarySmtpAddress  MemberCount
-------- ------------------  -----------
MyDG     MyDG@contoso.com              2
OtherDG  OtherDG@contoso.com           8

PS❯ Get-Help Get-DistributionGroupInfo -Full

NAME
    Get-DistributionGroupInfo

SYNTAX
    Get-DistributionGroupInfo [-Identity] <string> [-MemberCount] [<CommonParameters>]

PARAMETERS
    -Identity <string>
        Please provide a Distribution Group.

        Required?                    True
        Position?                    0
        Accept pipeline input?       true (ByValue)
        Parameter set name           (All)
        Aliases                      None
        Dynamic?                     False
        Accept wildcard characters?  False

    -MemberCount
        Output member count

        Required?                    False
        Position?                    Named
        Accept pipeline input?       False
        Parameter set name           (All)
        Aliases                      None
        Dynamic?                     False
        Accept wildcard characters?  False

    <CommonParameters>
        This cmdlet supports the common parameters: Verbose, Debug, ErrorAction,
        ErrorVariable, WarningAction, WarningVariable, OutBuffer, PipelineVariable,
        and OutVariable. For more information, see about_CommonParameters
        (https://go.microsoft.com/fwlink/?LinkID=113216).
Begin, Process, End
The example function supports the passing of objects through the pipeline. The Process script block works on every item passed through the pipeline. But what if we want to perform some housekeeping before and after processing these objects? For example, we want to initialize some variables before processing all objects.
To do this, we can add a Begin and End script block before and after the Process block.
Begin {
    # Initialize
    $Items= 0
}
Process {
    # Do Something
    $Items++
}
End {
    # Cleanup
    Write-Host ('We processed {0} object(s)' -f $Items)
}
An example is when you want to count how many objects you have processed, as the number of objects passed through a pipeline is unknown upfront. Another example of this would be the Sort-Object command, which can only sort objects when all objects have been passed to it.
The script might have a pipeline function with a Begin script block to initialize the data set, a Process block to add all the items to it, and ultimately an End block performing the sort, as sketched below. Note that the Begin and End blocks are optional. Also, if no object is passed through the pipeline, the Process block is skipped, but Begin and End will still be executed when defined.
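A hedged sketch of such a pipeline function; it simply collects the items in Process and sorts them in End, and the function name is hypothetical:

Function Sort-MyObject {
    [CmdletBinding()]
    Param(
        [Parameter(ValueFromPipeline= $true)]$InputObject
    )
    Begin {
        # Initialize the collection before any pipeline object arrives
        $Items= [System.Collections.Generic.List[object]]::new()
    }
    Process {
        # Add every object received through the pipeline
        $Items.Add( $InputObject)
    }
    End {
        # Only now is the complete set known, so we can sort and emit it
        $Items | Sort-Object
    }
}

# Example usage
3, 1, 2 | Sort-MyObject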
Script Parameters
We discussed how parameters can be defined for a function, but how can this be achieved for a script? The answer is that the way to define parameters for a script is comparable to defining these for a function but happens at the script level instead. In practice, this means putting the definition at the very beginning of your script. For example, the following are the first few lines of your script:
[CmdletBinding()]
Param(
    [Parameter( Mandatory= $true)]
    [ValidateScript({ Test-Path -Path $_ -PathType Leaf })]
    [String]$CSVFile,
    [ValidateSet(',', ';')]
    [String]$Delimiter= ',',
    [System.Security.SecureString]$Password
)

Function X {
    # …
}

# etc.
- Ask the user to provide a value for $CSVFile when one has not been given (Mandatory= $true). You will notice a ValidateScript attribute as part of the CSVFile parameter definition. When specifying parameters, you can have PowerShell perform certain validations against the values provided. Some of the possible tests are:
- ValidateScript executes a script block that needs to evaluate to $true for the parameter value to be accepted. In the example, we check if the filename points to an existing file using Test-Path with the automatic variable $_ (the provided filename) and -PathType Leaf.
- ValidateSet to test the value against a set of predefined values. In the example, we use ValidateSet only to allow a comma or semi-colon for the $Delimiter parameter.
- Some of the other options are ValidateRange, ValidateCount, ValidatePattern, and ValidateLength. More information on parameter validation can be found in the about_Functions_Advanced_Parameters article on Microsoft Learn.
- The Delimiter parameter can be specified. If it is not specified (it is not mandatory), it gets a default value of ','.
- A $Password parameter can be provided. When specified, it needs to be of the type [System.Security.SecureString]. You are not limited to PowerShell’s built-in types; you can also use other (.NET) types, such as SecureString or a credential of the type [PSCredential].
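To tie this together, a hedged example of calling such a script; the script and file names are hypothetical:

# Prompt for the password as a SecureString and run the script
$Password= Read-Host -Prompt 'Password' -AsSecureString
.\Import-MyData.ps1 -CSVFile .\Users.csv -Delimiter ';' -Password $Password

# Omitting the mandatory CSVFile makes PowerShell prompt for it,
# and a delimiter outside the ValidateSet (such as '|') is rejected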
Make PowerShell Work for You
It is ambitious to discuss functions and parameters in PowerShell in a single article. I have not touched on other subjects, such as parameter sets and dynamic parameters. These might be topics for another article.
I hope that I have encouraged you to write reusable code, not only by leveraging scripts and functions but also by properly defining their parameters. Make PowerShell work for you where possible, letting it handle parameter validation and other constraints such as types. This lets you focus on the task at hand while keeping code less complex and more readable.
If you have questions or comments, feel free to reach out in the comments. If not, wait until the next article, where I will discuss flow control.