With my iSCSI target configured on FreeNAS and my Solaris 11 Global Zone installed, it’s time to configure the iSCSI initiator to discover the iSCSI target using the second NIC in my Solaris 11 host (or “Global Zone”).
In my lab environment, I have created one big volume called “ONEBIGVOLUME” on my FreeNAS, consisting of 4 x 7500 RPM SATA Disks. Within this single volume, I have created 5 x 250GB ZVols from which I’ve then created 5 x iSCSI device extents for my Solaris 11 host to discover. I’ll then create a single ZPool on my Solaris host, using these 5 iSCSI extents on FreeNAS as if they were local disks.
First I need to configure the 2nd NIC that I intend to use for iSCSI traffic on my network. I’ll refer to my own post here to assist me in configuring that 2nd NIC.
The screen shot below shows the process end-to-end.
The Oracle document here describes the process of enabling iSCSI.
I noticed that the subnet mask was incorrect on my 2nd NIC. My fault for not specifying it; the OS assumed an 8-bit mask instead of a 24-bit mask for my 10.0.0.0 network. I’ve included the steps taken to fix that below. Note the commands highlighted below that were not accepted by the OS, and how I ultimately fixed it.
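The fix can be sketched with ipadm as follows. This is a minimal sketch: the interface name net1 and the address 10.0.0.60 are my assumptions for illustration; check your own with ipadm show-addr.

```shell
# Interface name (net1) and address (10.0.0.60) are assumptions - verify with:
ipadm show-addr

# Delete the address object that picked up the default /8 mask...
ipadm delete-addr net1/v4

# ...and recreate it, this time explicitly specifying the 24-bit mask
ipadm create-addr -T static -a 10.0.0.60/24 net1/v4

# Confirm the new mask took effect
ipadm show-addr net1/v4
```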
Enable iSCSI Initiator
svcadm enable network/iscsi/initiator
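It’s worth confirming the service actually came online before moving on:

```shell
# Check the initiator service state - it should report "online"
svcs network/iscsi/initiator

# If it shows "maintenance" instead, get the diagnosis and log location with:
svcs -x network/iscsi/initiator
```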
From my FreeNAS, Services, iSCSI section, I can see that my base name is iqn.2005-10.org.freenas.ctl and my target is called solariszp1.
Dynamic Discovery
Here, I use dynamic discovery to find all disks on the FreeNAS iSCSI target, using just the IP Address.
This is probably the simplest way of discovering the disks, but also dangerous as there may be another disk amongst the list that is being used by another system (in my case, I have a VMWare DataStore too).
iscsiadm add discovery-address 10.0.0.50
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi
format
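Before dropping into format, you can also ask the initiator itself what it has discovered:

```shell
# List the iSCSI targets the initiator has discovered
iscsiadm list target

# Verbose form shows the discovery method, TPGT and connection details per target
iscsiadm list target -v
```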
It is far from easy to correlate which of these Solaris “disks” pertain to which iSCSI extents on FreeNAS. The only giveaway as to which one is my VMWare DataStore is its size, shown below…
So I definitely do not want to use this disk on the Solaris system, as it’s already in use by VMWare. This is why it’s a good idea to use static discovery and/or authentication!
On my Solaris host, I can go back and remove the FreeNAS discovery address and start over using Static Discovery instead.
Static Discovery
I know the IP Address, port, base name and target name of my FreeNAS where my iSCSI extents are waiting to be discovered so I may as well use static discovery.
As I’ve already used dynamic discovery, I first need to list the discovery methods, disable Send Targets (dynamic discovery), and enable Static (static discovery).
It’s a bad idea to use both static discovery and dynamic discovery simultaneously.
iscsiadm remove discovery-address 10.0.0.50
iscsiadm modify discovery -t disable (Disables Send Targets)
iscsiadm modify discovery -s enable (Enables Static)
iscsiadm list discovery (Lists discovery methods)
With static discovery set, I can now add the static configuration for my target, not forgetting the port (like I just did, above). Unlike dynamic discovery, static discovery takes the target name as well as the discovery address, via add static-config:
iscsiadm add static-config iqn.2005-10.org.freenas.ctl:solariszp1,10.0.0.50:3260
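With the static entry in place, the initiator can be re-checked and the device nodes rebuilt, much as before:

```shell
# Confirm the static target entry was accepted
iscsiadm list static-config

# Only the statically configured target should now be listed
iscsiadm list target

# Rebuild the iSCSI device nodes so the new LUNs appear to the OS
devfsadm -i iscsi
```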
You can now see that by using static discovery to discover only the extents available at the “iqn.2005-10.org.freenas.ctl:solariszp1” target at 10.0.0.50 on port 3260, my Solaris 11 host has found only the 5 devices (extents) I have in mind for my ZPool; the VMWare DataStore has not been discovered.
The format command is a convenient way to list the device names for your “disks”, but you don’t need to use format to do anything else to them, so press CTRL-C to exit format.
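If you only want the disk list and not the interactive menu, format can be made to exit on its own by feeding it an empty input:

```shell
# Print the "AVAILABLE DISK SELECTIONS" list and exit immediately
# (format quits when it hits end-of-input instead of waiting at the prompt)
format < /dev/null
```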
Create ZPool
I can use my notes here to help with configuring ZPools and ZFS.
Since my FreeNAS uses ZFS itself to turn 4 x physical 2TB SATA disks into its 7TB “ONEBIGVOLUME”, which is subsequently carved up into a 1TB VMWare DataStore and my 5 x 250GB Solaris 11 ZPool1 volumes, the RAIDZ resilience to physical drive failure is provided at the NAS level and need not be duplicated when configuring the ZPool from the 5 iSCSI extents. I could have created a single 1TB iSCSI extent and created my ZPool on the Solaris host from just the one “disk”. By creating 5, at least I have the option of creating my ZPool with RAIDZ on the Solaris host in my lab too.
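That RAIDZ option would look like the sketch below. The device names are placeholders, not my actual lab devices; substitute the c#t#d# names that format reported.

```shell
# Hypothetical alternative: trade one extent's worth of capacity for a
# second layer of RAIDZ at the Solaris level (device names are placeholders)
zpool create ZPOOL1 raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
```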
zpool create ZPOOL1 <device1> <device2> <device3> <device4> <device5>
Here you can see the system warning about the lack of RAIDZ redundancy in my new pool. If the disks were physical, it’d be a risk but in my lab environment, it’s not a problem.
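Once created, the pool’s health and layout can be confirmed straight away:

```shell
# Show the pool's state, its vdev layout and any errors
zpool status ZPOOL1

# Show capacity, usage and health at a glance
zpool list ZPOOL1
```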
Although FreeNAS defaults to compression being turned on when you create a new volume in a pool, I created each of my 5 volumes used as iSCSI extents here with compression disabled, because I intend to use the compression and deduplication options when creating the ZFS file systems that will host my Solaris Zones on my Solaris 11 host instead.
I have a separate post here on Administering Solaris 11 Zones with the requisite commands but will post screenshots here from my own lab.
This is really where the post ends within the context of connecting Solaris 11 to iSCSI storage.
Create ZFS mount point for Zones
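A sketch of the dataset creation, with compression and deduplication enabled as described above. The dataset name ZPOOL1/zones and the /zones mountpoint are my own choices for this lab.

```shell
# Create the file system that will hold the zones, compressed and deduplicated
# (dataset name and mountpoint are lab choices, not requirements)
zfs create -o mountpoint=/zones -o compression=on -o dedup=on ZPOOL1/zones

# Confirm the properties were applied
zfs get compression,dedup,mountpoint ZPOOL1/zones
```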
Create/Configure Zone1
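A minimal zonecfg sketch, assuming the zone is called zone1 and lives under the /zones dataset created earlier:

```shell
# Create a zone configuration non-interactively; zonecfg reads its
# subcommands from stdin (zone name and path are assumptions)
zonecfg -z zone1 <<'EOF'
create
set zonepath=/zones/zone1
set autoboot=true
commit
EOF

# Review the resulting configuration
zonecfg -z zone1 info
```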
Create system configuration for Zone1
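The system configuration (hostname, networking, users, timezone) can be captured up front as a profile rather than answered interactively at first boot. The output path here is my assumption.

```shell
# Walk through the system configuration screens and save the answers
# as an XML profile for use at zone install time (path is a lab choice)
sysconfig create-profile -o /zones/zone1-profile.xml
```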
Install Zone1
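The install step, assuming the profile saved above; the -c option applies a configuration profile during installation so the zone comes up preconfigured.

```shell
# Install the zone, applying the saved system configuration profile
zoneadm -z zone1 install -c /zones/zone1-profile.xml
```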
Boot Zone1
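Booting and verifying the zone:

```shell
# Boot the installed zone
zoneadm -z zone1 boot

# List all zones with their state - zone1 should show as "running"
zoneadm list -cv
```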
Ping Zone1
Log into Zone1
SSH From Linux Workstation
ZLOGIN from Solaris Global Zone
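From the Global Zone, zlogin gives you either the console or a direct shell:

```shell
# Attach to the zone's console (handy for watching first boot);
# disconnect with the ~. escape sequence
zlogin -C zone1

# Or open a shell directly in the zone as root
zlogin zone1
```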
So that’s the process end-to-end, from discovering iSCSI SAN storage through logging into your new Solaris 11 Zone.
The post Configure Solaris 11 ISCSI Initiator appeared first on Cyberfella Ltd.