
Showing posts from January, 2015

Display server name or node in the cluster configuration

In a cluster configuration, if we want to know which cluster server we have accessed, it is simple: Liferay provides a property for exactly this. Add the following property in portal-ext.properties:

```properties
#
# Set this to true to display the server name at the bottom of every page.
# This is useful when testing clustering configurations so that you can know
# which node you are accessing.
#
web.server.display.node=true
```

Internally, Liferay uses PortalUtil.getComputerName() to get the computer name. Look into the constructor of the PortalImpl class to see how Liferay resolves the server node name:

```java
_computerName = System.getProperty("env.COMPUTERNAME");

if (Validator.isNull(_computerName)) {
    _computerName = System.getProperty("env.HOST");
}

if (Validator.isNull(_computerName)) {
    _computerName = System.getProperty("env.HOSTNAME");
}
```

After restarting the server we can see the node name at the bottom of every page.
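The fallback chain above can be sketched as a small standalone helper that returns the first non-empty value among several system properties. This is only an illustration of the lookup order; the class and method names here are hypothetical, not Liferay API:

```java
public class ComputerNameResolver {

    // Return the first non-null, non-empty system property among the given
    // keys, or null if none is set (mirrors the PortalImpl fallback chain).
    static String firstNonEmptyProperty(String... keys) {
        for (String key : keys) {
            String value = System.getProperty(key);

            if ((value != null) && !value.isEmpty()) {
                return value;
            }
        }

        return null;
    }

    public static void main(String[] args) {
        // Simulate an environment where only env.HOSTNAME is set.
        System.setProperty("env.HOSTNAME", "node-2");

        String computerName = firstNonEmptyProperty(
            "env.COMPUTERNAME", "env.HOST", "env.HOSTNAME");

        System.out.println(computerName); // prints "node-2"
    }
}
```

Because the properties are checked in order, COMPUTERNAME (typically set on Windows) wins over HOST and HOSTNAME (typical on Unix-like systems), which matches the order in the PortalImpl snippet.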

Database connection pool C3P0 configuration in Liferay

In Liferay, the default connection pool is C3P0, configured using portal-ext.properties. Liferay Portal also supports configuring the database connection pool through the application server: Liferay can access an application-server-level data source using JNDI, and the JNDI-based database connection pool configuration is the recommended approach.

Add the following property in portal-ext.properties:

```properties
jdbc.default.jndi.name=jdbc/LiferayPool
```

Open server.xml in the <liferay-home>/tomcat/conf folder, locate the Resource tag, and add the following properties:

```xml
<Resource
    auth="Container"
    description="Portal DB Connection"
    driverClass="com.mysql.jdbc.Driver"
    maxPoolSize="75"
    minPoolSize="10"
    acquireIncrement="5"
    name="jdbc/LiferayPool"
    user="<MySQL Database User Name>"
    password="<MySQL Password>"
    factory="org.apache.naming.factory.BeanFactory"
```
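The Resource entry above also needs a type and a JDBC URL to be usable. The following is a sketch of a complete entry; the type and jdbcUrl values are assumptions based on standard Tomcat/C3P0 conventions (with BeanFactory, each attribute maps to a ComboPooledDataSource bean property), and the database name lportal is illustrative:

```xml
<Resource
    auth="Container"
    description="Portal DB Connection"
    name="jdbc/LiferayPool"
    type="com.mchange.v2.c3p0.ComboPooledDataSource"
    factory="org.apache.naming.factory.BeanFactory"
    driverClass="com.mysql.jdbc.Driver"
    jdbcUrl="jdbc:mysql://localhost:3306/lportal?useUnicode=true&amp;characterEncoding=UTF-8"
    user="<MySQL Database User Name>"
    password="<MySQL Password>"
    minPoolSize="10"
    maxPoolSize="75"
    acquireIncrement="5" />
```

After restarting Tomcat, Liferay resolves jdbc/LiferayPool through JNDI instead of creating its own pool from the jdbc.default.* properties.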

Quartz scheduler configuration in a cluster

Quartz is a very popular open-source scheduler engine. The Quartz scheduler stores data related to scheduled jobs in the Liferay database. In a clustered environment there is a chance that the scheduler starts on all nodes at the same time, which causes duplicate entries in the portal: a job should execute only once across all nodes, but if we have configured 4 nodes, then 4 schedulers start at the same time and the same job can run on multiple nodes simultaneously. This can create havoc. To prevent this situation, we need to configure Quartz for the clustered environment.

Add the following property in portal-ext.properties:

```properties
org.quartz.jobStore.isClustered=true
```

Then drop all the tables starting with QUARTZ_. This step is required if the Liferay tables have already been created. We just added a property to let the Quartz scheduler know that we are running multiple instances of the Quartz scheduler.
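The drop step above might look like the following. The table names are the Quartz table set commonly found in Liferay databases, listed here as an assumption; verify them against your own schema (e.g. SHOW TABLES LIKE 'QUARTZ\_%' in MySQL) before dropping anything:

```sql
-- Drop trigger detail tables first to satisfy foreign-key constraints.
DROP TABLE QUARTZ_BLOB_TRIGGERS;
DROP TABLE QUARTZ_SIMPLE_TRIGGERS;
DROP TABLE QUARTZ_SIMPROP_TRIGGERS;
DROP TABLE QUARTZ_CRON_TRIGGERS;
DROP TABLE QUARTZ_TRIGGERS;
DROP TABLE QUARTZ_JOB_DETAILS;
DROP TABLE QUARTZ_CALENDARS;
DROP TABLE QUARTZ_FIRED_TRIGGERS;
DROP TABLE QUARTZ_LOCKS;
DROP TABLE QUARTZ_PAUSED_TRIGGER_GRPS;
DROP TABLE QUARTZ_SCHEDULER_STATE;
```

Liferay recreates these tables on the next startup, and with isClustered=true the nodes then coordinate through the QUARTZ_LOCKS table so each job fires on only one node.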

File utility methods in Liferay

Liferay has many utility methods to handle files, like copyDirectory, copyFile, unzip, delete, createTempFile, etc. In a real scenario, to unzip a file we would need to write plain Java code, but Liferay has a utility method to unzip the file. Have a look into the FileUtil class and its method unzip(File source, File destination):

```java
public static void unzip(File source, File destination) {
    PortalFilePermission.checkCopy(_getPath(source), _getPath(destination));

    getFile().unzip(source, destination);
}
```

To see how Liferay performs the unzip internally, look into the ExpandTask class and its method:

```java
public static void expand(File source, File destination) {
    Expand expand = new Expand();

    expand.setDest(destination);
    expand.setProject(AntUtil.getProject());
    expand.setSrc(source);

    expand.execute();
}
```

There it is using the Expand class (org.apache.too
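For comparison, the "pure Java code" the post alludes to might look like the following sketch using java.util.zip from the JDK (the class name PlainJavaUnzip and the self-test in main are illustrative, not Liferay code):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class PlainJavaUnzip {

    // Extract every entry of the zip archive into the destination directory.
    static void unzip(File source, File destination) throws IOException {
        try (ZipFile zipFile = new ZipFile(source)) {
            Enumeration<? extends ZipEntry> entries = zipFile.entries();

            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                File target = new File(destination, entry.getName());

                // Guard against "zip slip" path traversal entries.
                if (!target.getCanonicalPath().startsWith(
                        destination.getCanonicalPath())) {

                    throw new IOException(
                        "Entry outside destination: " + entry.getName());
                }

                if (entry.isDirectory()) {
                    target.mkdirs();

                    continue;
                }

                target.getParentFile().mkdirs();

                try (InputStream in = zipFile.getInputStream(entry);
                        FileOutputStream out = new FileOutputStream(target)) {

                    in.transferTo(out);
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a tiny zip on the fly, then unzip it.
        File zip = File.createTempFile("sample", ".zip");

        try (ZipOutputStream zos = new ZipOutputStream(
                new FileOutputStream(zip))) {

            zos.putNextEntry(new ZipEntry("hello.txt"));
            zos.write("hello".getBytes());
            zos.closeEntry();
        }

        File dest = Files.createTempDirectory("unzipped").toFile();

        unzip(zip, dest);

        System.out.println(new String(Files.readAllBytes(
            new File(dest, "hello.txt").toPath()))); // prints "hello"
    }
}
```

This is roughly what FileUtil.unzip saves us from writing by hand; the Ant-based Expand task additionally handles things like permissions and overwrite behavior.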