Note: I am not sure how useful this post is. It has remained private for more than a year, and I am making it public at Prasad’s request. Please read the disclaimer before reading the post.
Finally, after four and a half months, I got an opportunity to implement the Single Repository, Multiple Content Servers model. We had to shift our Documentum setup to a new server, and I realized it was the perfect time to play around with the existing one. Multiple Content Servers can be implemented on the same host as well as on different hosts. The first two installation models discussed here were implemented on 6.5 SP1 on Windows with SQL Server, whereas the last one was implemented on 6.5 SP2 on Linux with Oracle.
Multiple Content Servers on the same Server Host:
This case was quite simple. We just needed to create a new Server Configuration object, a new copy of server.ini (with small changes and a new name), a few changes in the Services file located at C:\WINDOWS\system32\drivers\etc, and an entry in the Windows Registry. The steps are described in the Content Server Installation Guide, and the procedure was short and simple. Once the configuration is complete, an additional Docbase service appears in the Windows Services list. The Docbase is available if either one or both of these Docbase services are running. This implementation should indeed increase the maximum possible number of concurrent sessions for the repository.
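For illustration, the server.ini for the second server instance might look like the sketch below. The repository name, service name, and database details are hypothetical; the key points are the new server_config_name and the new service name, which must also be registered in the Services file.

```ini
; server.ini for the second Content Server instance on the same host
; (all names here are hypothetical -- use your own repository's values)
[SERVER_STARTUP]
docbase_id = 90210
docbase_name = myrepo
server_config_name = myrepo_cs2   ; the new Server Configuration object
service = myrepo_cs2              ; the new service name for this instance
database_conn = myrepo_db
database_owner = myrepo_owner
```

The matching Services file entry would then be something like `myrepo_cs2  49402/tcp` (again, a hypothetical port) in C:\WINDOWS\system32\drivers\etc\services.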
Multiple Content Servers running on different Hosts:
a) Content File Storage
To create multiple Content Servers running on different server machines for a single repository, the first step is to create a single Content Server, single repository installation on the first host. On the second host, install the Content Server but do not create any Docbase. Run cfsConfigurationProgram.exe, located at C:\Documentum\product\6.5\install. The program prompts for a Docbroker and, after authentication, displays the list of Docbases. The whole procedure is described in the Content Server Installation Guide and the Distributed Configuration Guide. While implementing this setup we ran into an error that read something like “Error – cannot find source: dbpasswd.txt Please read error log C:\Documentum\product\6.5\install\dmadmin.ServerConfigurator.log for more information“. We found a solution to this on geekveda.
The bug surfaces if you specify a domain name on one of the previous screens. To work around it, click “OK” on the error message to return to the “Service Name” screen. Click the “Back” button 4 times to reach the repository and username selection screen, delete the Repository Super User Domain, then click “Next” and continue with the installation.
That solution was of immense help, and it was the only issue we encountered during the installation process. Once the installation is complete, a Docbase service and a Docbroker service are available on the remote server. Both Docbrokers are listed in the dfc.properties of the web application, and each ACS server is projected to the other Docbroker as well. The Content File Storage configuration uses a distributed filestore.
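As a sketch, with hypothetical hostnames, the dfc.properties of the web application would list both Docbrokers so that sessions can be brokered through either one:

```properties
# dfc.properties of the web application -- both Docbrokers listed
# (hostnames are hypothetical; 1489 is the default Docbroker port)
dfc.docbroker.host[0]=primary-cs.example.com
dfc.docbroker.port[0]=1489
dfc.docbroker.host[1]=remote-cs.example.com
dfc.docbroker.port[1]=1489
```

DFC tries the Docbrokers in index order, so if the first one is unreachable the client falls back to the second.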
b) High-Availability Configuration
To achieve the HA configuration, the database and the filestore have to be shared between the Primary and Secondary Content Servers. The installation of the Primary Content Server is the same as any single Content Server installation. While installing the Secondary Content Server, select the same database and the shared filestore, and install the Docbroker. At this point, instead of creating a new Docbase, we have to copy a few files from the Primary CS: server.ini, dbpasswd.txt, aek.key, dm_start_docbase, dm_stop_docbase, etc. Minor changes are needed in server.ini and dm_stop_docbase, and a new Server Configuration object is created. Both Docbrokers are projected to each other, and the dfc.properties of the application should mention both. I guess that sums it all up. I could not find the process well documented in the CS Installation Guide, though the CS Administration Guide provides some insight.
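For the cross-projection, each server’s server.ini can carry projection targets for both Docbrokers. A minimal sketch, assuming hypothetical hostnames and the default Docbroker port:

```ini
; Appended to server.ini on each Content Server so that the server
; projects to both Docbrokers (hostnames are hypothetical)
[DOCBROKER_PROJECTION_TARGET]
host = primary-cs.example.com
port = 1489

[DOCBROKER_PROJECTION_TARGET_1]
host = secondary-cs.example.com
port = 1489
```

Projection targets can also be maintained on the server config object through Documentum Administrator instead of editing server.ini directly.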
Now the big question: which one to choose, Content File Storage or the High-Availability configuration?
Points to keep in mind: the Content File Storage configuration offers a distributed filestore, whereas the High-Availability configuration needs a shared filestore. I have also heard that installing an Index Server with the Content File Storage configuration limits the full-text search functionality, but, as we all know, the source of that information cannot be trusted. What I am sure of is that installing an Index Server with the High-Availability CS configuration works fine for both the Primary and Secondary CS. A High-Availability CS installation with a High-Availability Index Server installation is well supported.
*To be updated
Hi Utkarsh,
Thanks for making it available to everyone. It need not be 100% perfect; as long as someone can understand a bit, that’s fine…
Thanks again.
Prasad
Hi Utkarsh,
Great to see the info on the Content Server. Hope you have played a lot with the Content Server.
I have a query regarding content server implementation.
I have a requirement to implement 3 content servers in 3 different locations, which will be accessed from different countries across the world.
1) I want the data/documents stored on one content server to be replicated to the remaining 2 content servers and vice versa, which means the data should be available on all 3 content servers.
2) If one content server fails or is stopped, the users who were accessing it should be able to access either of the remaining 2 content servers and access/update the documents. If this is possible, how?
3) If we need 3 databases + content servers, how would a user assigned to a particular content server be able to access data/documents from another content server?
It would be great if you can respond to it at the earliest.
Thanks
Vijay
Hi Vijay,
1) When you say 3 Content Servers, do you really need 3 different repositories? A single repository with 3 Content Servers could be the simplest solution: the content and metadata are available at a single location, so there is no need to replicate them. The content and metadata can be accessed through 3 different CSs from three different locations.
2) You can have High Availability / failover in the above situation. Each Content Server is projected to the other two, and the web application points to all three Content Servers. When a CS fails, requests are automatically diverted to one of the other CSs.
3) 3 databases (repositories) + 3 Content Servers would mean a dedicated Content Server for each repository. I hope, when you say that, you don’t want the same content to be available in all three repositories. For a user to access the content in all 3 repositories, he must first be present in all three. You may have to sync all your users against a single source, or you may want a federation, depending on whether you need the same set of users, groups, and ACLs in all three repositories. You will also need trusted authentication between these repositories.
Hope this helps.
Thanks,
Uttkarsh
Hi Utkarsh,
Thanks for the swift response.
The requirement is that there are 10 locations around the world from which these content servers will be accessed. In the 3 main locations, where usage is high, we want to deploy content servers so that users can access the content server/repository locally, while the users in the remaining 7 locations are spread across the world.
We want a file collaboration/replication strategy such that data updated at one location/content server is updated on the remaining 2 servers. If a content server fails at one location, the users assigned to it should be able to access/update data on either of the remaining 2 servers, and once the failed server comes back up, its users should be able to fetch the latest data from it.
Can we have a solution for this? If so, how can we achieve it?
Some forums say that MaxDB support runs until 2013. Do you know whether SAP is going to release a new version by 2013 or stop development on it?
Regards
Vijay Ganga
Hi Vijay,
I think you can go for a High-Availability Content Server installation at the 3 primary locations and use BOCS servers at the 7 remote locations. There will still be only one repository (database + filestore). That would be the simplest solution. Alternatively, you can go for a distributed filestore using the CFS configuration.
Do check the Content Server Installation Guide for details.
I would not be of much help in case of MaxDB.
Thanks,
Uttkarsh
Hi Uttkarsh,
It is great to see someone sharing about Documentum on the web.
I’m new to Documentum and I want to learn a lot.
As for a single repo with multiple content servers,
I face the same problem as yours, which is
“Error – cannot find source: dbpasswd.txt Please read error log C:\Documentum\product\6.5\install\dmadmin.ServerConfigurator.log for more information“
Is there another solution that you have tried besides the one from geekveda? I had no luck with the geekveda solution.
thanks,
Andy
Hi Andy,
I believe it’s basically a bug in the installer. Make sure you are doing exactly what is suggested on geekveda; the solution is simply a workaround for the bug. You can also try deploying the latest patch provided by EMC.
But make sure there are no connectivity issues between the Primary and Secondary CS before concluding that you have hit a bug in the installer.
The installer is basically trying to copy dbpasswd.txt from the Primary CS to the Secondary. As a last resort, you may like to copy the file manually and see how it goes from there.
-Uttkarsh