In response to an earlier post on how to install OpenDJ and OpenAM, someone reminded me that I never came back and wrote the follow-on post I had promised. They posted the question to my other blog site (which I have since migrated over to this site). I am going to answer the question here, as this is the only blog that I am presently maintaining.
Can you paste me the link which talks about the next part of the installation?
1. Configure OpenAM to look to OpenDJ for users
2. Install a Web agent
3. Create an Access Policy to protect a web application.
I often say that I am successful by “standing on the shoulders of giants”. There is so much great “how to” content on the ForgeRock Community wiki site, and that is where I turn when I am looking for help or advice on OpenAM, OpenDJ, and OpenIDM. Take a look at the section, “OpenAM: How do I?”. There are a number of “how-to” articles that are literally step by step with screen shots.
The link for the article that specifically talks about how to configure OpenAM to protect a web application can be found here: Add Authentication to a Website using OpenAM.
******** Shameless Plug Alert ************
Protecting a single web site is pretty simple and straightforward, but sometimes you have unusual or difficult use cases where you need another set of eyes or additional help architecting your solution. I have a lot of experience with the ForgeRock suite and I am available to provide consulting services. Please don’t hesitate to reach out if you are interested in contracting my services.
I have been installing the ForgeRock stack on Ubuntu a lot lately. One of the things that I noticed is that when configuring OpenAM and OpenDJ for automatic startup, you need to let OpenDJ finish starting up before starting Tomcat (OpenAM) … otherwise OpenAM will not be able to find its configuration and will assume that it is a new install.
I added a timer to the startup script to make it sleep for a minute before starting Tomcat (YMMV). Mark Craig (from ForgeRock) clued me into a nice little bit of Upstart configuration, “start on started opendj”; essentially this tells the startup job to wait until OpenDJ has started before starting Tomcat. Thanks Mark, that’s exactly what I was looking for.
cd [install dir]/opendj/bin
sudo ./create-rc-script -f /etc/init.d/opendj -u [user to run as]
sudo update-rc.d opendj defaults
OpenAM [on Tomcat]
sudo vi /etc/init.d/tomcat
Now paste the following content:
#!/bin/sh
# Tomcat auto-start
# description: Auto-starts tomcat
# start tomcat with user: ubuntu
# pidfile: /var/run/tomcat.pid
case $1 in
start)
/bin/su ubuntu /opt/tomcat/bin/startup.sh
;;
stop)
/bin/su ubuntu /opt/tomcat/bin/shutdown.sh
;;
restart)
/bin/su ubuntu /opt/tomcat/bin/shutdown.sh
/bin/su ubuntu /opt/tomcat/bin/startup.sh
;;
esac
exit 0
Make the script executable by running the chmod command:
sudo chmod 755 /etc/init.d/tomcat
The last step is linking this script into the startup folders with a symbolic link. Run the following two commands:
sudo ln -s /etc/init.d/tomcat /etc/rc1.d/K99tomcat
sudo ln -s /etc/init.d/tomcat /etc/rc2.d/S99tomcat
Restart the system and Tomcat will start automatically.
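For the “start on started opendj” approach that Mark mentioned, the SysV script above would be replaced by an Upstart job. Here is a minimal sketch, assuming an Ubuntu release with Upstart 1.4+ (for the setuid stanza), that OpenDJ itself is registered as an Upstart job named opendj (a classic init.d script does not emit the “started opendj” event on every release), and the same paths and ubuntu user as above:

```
# /etc/init/tomcat.conf -- a sketch, not a tested drop-in
description "Tomcat (OpenAM)"

# Wait for the opendj Upstart job before bringing up Tomcat
start on started opendj
stop on runlevel [!2345]

# Run Catalina in the foreground as the ubuntu user (setuid requires Upstart 1.4+)
setuid ubuntu
exec /opt/tomcat/bin/catalina.sh run
```

With this in place you would not need the sleep timer at all, since Upstart orders the two jobs for you.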
I am working with a client today who has Oracle Identity Federation (OIF) 11g configured with Oracle Access Manager (OAM) 10g as the default Authentication Engine. With this configuration the authentication module is dictated by the OAM policy configuration. If you set the OAM policy (the policy that protects the /fed/user/authnoam resource) to IWA, then all federated SSO attempts are routed to the IWA authn engine; if that policy is configured for a custom login form, then all SSO attempts are routed to the custom login form … I think you get the point. So, what happens when some resources (SaaS apps configured as SP/RPs in OIF) require different levels of assurance (LOAs)? I thought maybe I could use the SAML default authentication method configured in the SP/RP metadata in the circle of trust (COT), but that does not get passed on to OAM. My second thought was to create a different policy for the URL being protected … but OIF uses a pretty standard URL (/fed/user/authnoam?refid=id-blahblahblah), so OAM wouldn’t be able to figure out which policy to use.
So, has anyone else found a solution to this problem? I would appreciate any discussion or feedback.
So, this is not my “typical” IDM post but I wanted to save this for my own future reference.
I am working from a Mac OS X desktop and connecting to an EC2 (Red Hat) instance over SSH. I am installing and configuring Symfony, which requires (strongly desires) that you connect to the config.php script from localhost (127.0.0.1). There are two ways around this:
1.) Modify PHP script to comment out the localhost checks (boring)
2.) Create a SSH tunnel from Mac terminal to the web port on the EC2 instance
The first option is pretty obvious and requires basic skills. I am not sure what the ripple effects of this are, so I’d prefer not to go that route.
The second option earns more “skillz” points and doesn’t require you to modify Symfony’s config.php file. Note: Originally, I was using port 81 as the local port; I have since changed it to 1337. Chris (see comments) made an excellent point that you don’t need to use sudo if your local port is higher than 1024.
1. Open Terminal Window from OS X desktop
2. Type: ssh -i mykey.pem -L 1337:am.acme.com:80 am.acme.com
So what did we do here?
ssh -i mykey.pem: connect to the remote server using SSH with the key that you use to connect to your Amazon instance (you do use keys, right?)
-L 1337:am.acme.com:80: the local port (on OS X) will be 1337, and that port is mapped to port 80 on the EC2 instance am.acme.com
am.acme.com: this is the remote (EC2 instance) hostname
3. The first time you connect to this server you will be asked to add this host to your known hosts file (say yes)
4. Open a web browser (from OS X) and enter “127.0.0.1:1337/Symfony/web/config.php” to connect to the Symfony config on the EC2 instance
As long as you keep the SSH connection open then you can use the tunnel. To close the tunnel, just exit from the SSH session.
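If you rebuild this tunnel often, the whole incantation can live in ~/.ssh/config instead. A sketch, using the same hostname, port mapping, and key as above (the symfony-tunnel alias is made up, and the ec2-user login is an assumption; Red Hat AMIs commonly use it, so adjust to whatever user you normally SSH in as):

```
# ~/.ssh/config -- hypothetical entry; alias, user, and key path are assumptions
Host symfony-tunnel
    HostName am.acme.com
    User ec2-user
    IdentityFile ~/mykey.pem
    # Same mapping as "-L 1337:am.acme.com:80"
    LocalForward 1337 am.acme.com:80
```

After that, a plain “ssh symfony-tunnel” brings up the same tunnel as step 2.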
I just finished configuring Oracle Access Manager (OAM) for Common Access Card (CAC) authentication, integrated with Axway’s Server Validator (SV) plugin for certificate validation (I will blog about this in another post). While discussing this with another engineer on the project, he mentioned that this really opened the door for tightly integrating with a lot of their existing partners. I said that while this is great, I would prefer to federate with these partners and not have to deal with managing the extra infrastructure components, as well as having to manage several trusted certificates provided by the partners (with intermediate certificates there were about six just for this partner alone … I am trying to picture how that scales for each new partner). I freely admit that I am biased towards federation. I am totally sold on the benefits of having the Identity Provider (IdP) take care of credentialing and authentication so that the Service Provider (SP) can focus on the applications. His point in preferring to authenticate locally with CAC (vs. via federation) was that doing so somehow offers a better user experience. I think you can also make the argument that a particular potential IdP may not have federation capabilities (though this won’t always be the case, IMO). Do you think that you can achieve the same Level of Assurance (LoA) by using federation (SAML, OpenID, or OpenID Connect) instead of authenticating at the SP?
I’d like to crowd-source this discussion and see if we can put together some good arguments for/against either side. Please RT and comment if you have thoughts/opinions on this.
I had an interesting use case come up this morning, and I am wondering if there are any “federation” products that can handle it. My client would like to configure the IdP to handle different sets of users (let’s call them “internal” and “external”). To keep external users from being redirected to the IdP directly, it has been front-ended with a proxy (Apache HTTP) located in the DMZ. Internal users should have access to the same SPs … but we probably don’t want the internal users getting redirected to the proxy located in the DMZ. One of the products that I work with can only have one “server URL” configured (that I know of) … do other products allow for multiple URLs to be configured? I would love to hear if this is actually a “problem” and, if so, how other vendors have handled it. The easy solution on our part is to deploy another federation server (IdP) to handle the different users … but personally I hate to keep telling the customer to deploy a new instance each time a new use case comes up. I don’t think that scales very well.
I just got this from my friends at OptimalIDM and wanted to share this news.
OptimalIDM is formally announcing their Virtual Identity Server (VIS) for Office 365 via a press release at 9:00 a.m. this morning.
VIS for Office 365 adds a ton of features and support to Office 365 such as:
- Users can exist anywhere (i.e. eDirectory)
- Complete multi-forest support (no on-premises sync required)
- Non-routable UPNs (domain.local) and multiple UPN suffix support
- Two-factor authentication
- Denial-of-service prevention/detection
- Cloud firewall (filter data going to the cloud)
- Detailed audit logging
OptimalIDM is demonstrating this at a lunch presentation on Tuesday at TEC.
@billnelson gives us the most complete history of the Directory Services you will ever find (…until the next one) 🙂
The Most Complete History of Directory Services You Will Ever Find.
Whenever I need to integrate Oracle Identity Federation (OIF) and Oracle Access Manager (OAM), it always takes me a few minutes to remember which integration approach provides which capability. I decided to make myself a cheat sheet to help me remember. If you are ever in the same boat, hopefully this will help.