New cloud and dev, the new new.

Looking at Docker

While having a quick look at Docker I happened across a slide show presenting Docker, starting with its rapid take-up.

A list:

  • Jenkins.  An extendable open source continuous integration server
  • Travis. (From Wikipedia) In software development, Travis CI is a hosted, distributed[2] continuous integration service used to build and test projects hosted at GitHub.
  • Chef. Chef models IT infrastructure and application delivery as code, giving you the power and flexibility to achieve awesomeness.
  • Puppet. Puppet Open Source is a flexible, customizable framework available under the Apache 2.0 license designed to help system administrators automate the many repetitive tasks they regularly perform.
  • Vagrant. Create and configure lightweight, reproducible, and portable development environments.
  • OpenStack. Open source software for building private and public clouds.

But what is this all about?  I’m thinking out loud about transitioning from classic LAMP-in-a-box applications to elastic applications, built admin interface first, with functions as web services for responsive apps using the likes of Node.js and Create.js.

Posted in ITMS, Virtual Machines | Tagged , | 2 Comments

Updating WordPress to 3.7.1 and then some of its 80+ plugins

I need to update The Commons.  We have been at 3.5.2 for far too long.  With our CELT team, we have decided to update to 3.7.1, because it is a security release, but no further.  This may have the side effect of breaking some plugins.  We will see what we can live with and what fixes, replacements and compromises we have to make along the way.

Upgrade to 3.7.1

Ho hum, as we are not going up to 3.8.1, which would be as simple as clicking on update, I have to update manually using the distributed code.  I followed the instructions.  Before embarking on this journey I had a look for something that would tell me, for our WPMU install, which plugins are activated network-wide and which ones are activated on individual sites within the network.  To do this I used ‘WPMU Plugin Stats‘.  I printed this to paper and to PDF so I can tick things off and make notes.

Before doing the update it is important to deactivate all plugins and to run wp-admin/update.php and update the network before enabling them again according to the record I have made.

Here goes, the re-activate…

  • External Group Blogs : bp-groups-externalblogs.php on line 308, bad prepare statement

Updating the plugins went much better than usual.  This gave me the time to look at our missing LDAP Options page, which was fixed by following the instructions for WPMU Ldap Authentication.  I also tidied up some tables that were not created when we were having server problems.  To fix these I looked for errors in the error logs complaining about not being able to write to tables.  These errors would have the affected blog’s ID, a number, as a substring, e.g. wp_133_visitor_maps_st.  This script recreates the missing tables for a given blog ID:

#!/bin/bash
# Recreate the Visitor Maps tables for the blog whose numeric ID is
# passed as the first argument, e.g. 133 for wp_133_visitor_maps_st.

mysql -uroot -p ourblog <<HERE
CREATE TABLE \`wp_$1_visitor_maps_wo\` (
  \`session_id\` varchar(128) NOT NULL DEFAULT '',
  \`ip_address\` varchar(20) NOT NULL DEFAULT '',
  \`user_id\` bigint(20) unsigned NOT NULL DEFAULT '0',
  \`name\` varchar(64) NOT NULL DEFAULT '',
  \`nickname\` varchar(20) DEFAULT NULL,
  \`country_name\` varchar(50) DEFAULT NULL,
  \`country_code\` char(2) DEFAULT NULL,
  \`city_name\` varchar(50) DEFAULT NULL,
  \`state_name\` varchar(50) DEFAULT NULL,
  \`state_code\` char(2) DEFAULT NULL,
  \`latitude\` decimal(10,4) DEFAULT '0.0000',
  \`longitude\` decimal(10,4) DEFAULT '0.0000',
  \`last_page_url\` text NOT NULL,
  \`http_referer\` varchar(255) DEFAULT NULL,
  \`user_agent\` varchar(255) NOT NULL DEFAULT '',
  \`hostname\` varchar(255) DEFAULT NULL,
  \`provider\` varchar(255) DEFAULT NULL,
  \`time_entry\` int(10) unsigned NOT NULL DEFAULT '0',
  \`time_last_click\` int(10) unsigned NOT NULL DEFAULT '0',
  \`num_visits\` int(10) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (\`session_id\`),
  KEY \`nickname_time_last_click\` (\`nickname\`,\`time_last_click\`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE \`wp_$1_visitor_maps_st\` (
  \`type\` varchar(14) NOT NULL DEFAULT '',
  \`count\` mediumint(8) NOT NULL DEFAULT '0',
  \`time\` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  PRIMARY KEY (\`type\`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

HERE

This will represent a big improvement in the service.  Now to look at some blogs to see if the updates have worked…

Posted in CELT, ITMS, Library, The Commons | Tagged , , , , | Leave a comment

Hardening Apache using OpenVAS and RedHat advisories

My institution uses a tool provided by Janet to scan for vulnerabilities in web servers.  We fix problems as soon as we see them.  I have recently been looking at Apache on an up-to-date CentOS server.  In order to test my changes I installed the free OpenVAS tool.  The install is straightforward and, once I had set up the firewall on a test server, I could start scanning hosts.

The report was more verbose than the “complaint” report I was looking at.  I understand that tools like this cannot always tell if a flaw actually exists; instead they take clues emitted by the server, e.g. openssh 2.2-v5.  That example gives out the version of the software for which a flaw may exist but does not reveal, in this case, that the server is already patched.  In the report, a Common Vulnerabilities and Exposures (CVE) code is given for each “flaw”.  I looked these up to assess the threat, taking RedHat at their word.

When RedHat explains that a CVE is already patched, or that it does not apply because of how the machine is used, I can override the test in the scan, providing a cleaner report next time.

In this specific case, I was looking at the strength of SSL on one of our servers.  OpenVAS led me to look at SSL compression, the tokens Apache emits and the TRACE/TRACK methods too.
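For reference, those three items map to a handful of Apache directives.  This is a sketch rather than our exact configuration; SSLCompression in particular needs a reasonably recent httpd/mod_ssl build:

```apache
# Stop advertising version details in headers and error pages
ServerTokens Prod
ServerSignature Off

# Refuse the TRACE method (TRACK is an alias seen on some servers)
TraceEnable Off

# Disable TLS compression (CRIME mitigation); requires httpd 2.2.24+/2.4.3+
SSLCompression off
```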

A big thumbs up for OpenVAS and RedHat’s CVE database.

Posted in ITMS, Linux SysAdmin | Tagged , , | Leave a comment

Protected: CentOS virtual machine template to support LAMP and other applications. Part 2

This content is password protected. To view it please enter your password below:

Posted in Uncategorized | Enter your password to view comments.

CentOS virtual machine template to support LAMP and other applications. Part 1 (Updated 2016)

Preparing an install of CentOS to become a VMWare template.

Since being centralised, and with virtual machine infrastructure being somewhat new to ITMS (though familiar to some), we are homing in on single solutions for systems administration that should provide “wins” in terms of short-order provision of machines for services/applications, development, testing and research/student environments.

We have virtual machine environments in several forms and some colleagues, including myself, have been preparing for the great day when we converge our infrastructure into two very reliable data centres.  This week my team of developers were discussing the old days of installing Windows (not me…) using 20 floppy disks with the occasional bad sector.  A week of that would constitute work and was common practice.  In the GNU/Linux world, and probably in Windows, which is losing ground in the server room, dev ops are constantly struggling to get away from that and to move to instant provision of machines ready to run services.  It is a problem of scale.  At DMU, at last count, there were around 800 servers.  Re-provisioning those by installing one operating system at a time would take a long time.  With some preparation now, around agreed practices, we can speed up the move to the new infrastructure.

Why am I looking at this now?  There is an immediate need to create a server with a LAMP stack on it for DMU Global.  That ties in with a need for a WordPress (LAMP) install and a requirement for an LDAP server to support the Library’s OpenAthens LA service.  There are requirements in common and tonnes of choices about the best approach.  The requirements are not simply related to the common software components but relate to disk use/partitioning and security of the servers.  There is also the opportunity to create a template that can be used by other systems administrators in ITMS.  I hope to reduce the amount of work that needs to be done post-install so that colleagues can get on with the meat of setting up applications.

Some decisions:

When I managed the web team in the library we insisted on secure keys, passphrases and encrypted sessions for both the command line and file transfer.  This is pretty much a unique practice in the university, but passwords are proving to be a weak form of authentication and I think that secure keys are the way to go.  There is some flexibility in how SSH can be set up.  It is possible to allow passwords to be used by a restricted set of IP addresses.  This is something that we should discuss.  Passwords need to be changed if someone leaves the business.  It might be easier to revoke a secure key.
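As a sketch of that flexibility, OpenSSH can require keys everywhere but still allow passwords from a restricted set of addresses using a Match block (the subnet below is a placeholder, not a real network of ours):

```
# /etc/ssh/sshd_config (fragment)
PasswordAuthentication no
PubkeyAuthentication yes

# Allow password logins only from a trusted management subnet
Match Address 10.0.0.0/24
    PasswordAuthentication yes
```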

For fifteen-odd years I have separated out custom software, the application (web server) and data from the OS.  This has the advantage of being able to update the operating system, or even change it, without touching those local aspects.  A disadvantage is that on the data side those changes need to be reflected.  Another systems administrator would have to know about the changes, or have my skills and experience to ‘divine’ how the machine is set up; additionally, it is harder for those changes to survive an upgrade.  Of course, another advantage is that hackers and root kits cannot rely on the usual assumptions of a default install.  Part 2 of this series will detail those changes but the blog entry will be internal to DMU only.

GNU/Linux is faster when it is paravirtualised.  This is when code is shared between the guest virtual machine and the host.  Simplistically, more of the CPU is used in the traditional sense, giving an efficiency that virtualising via hardware cannot achieve.  Every solution has its own software for doing this, installed in the guest.  In our preparation for the Great Convergence we have used KVM, VirtualBox and, very recently, oVirt.  VMware has its own solution too.  We had hoped to move our virtual machines from their original home to the new VMware infrastructure by exporting and importing, but if this paravirtualisation software exists on the machine it may confuse things in the VMware environment.  A LAMP stack will work without the software so I am leaving it out of the install for the template.  It could be added later if, for example, access to a USB dongle is needed.  We are wedded to VMware for a time.  Hopefully, something like oVirt or OpenStack will be considered later on.  The guest tools will help bits spin faster but the administration overhead of moving between environments might be something we want to avoid.  The choice for Windows might be different.

Back to security.  Some favour the use of sudo and multiple accounts with passwords.  I have a feeling I will not win this one.  My small team was very used to being themselves locally but logging in as root to remote machines.  All machines had a small number of accounts and only root had a password.  This is unusual and might have added to our success in that it is not the natural assumption.  Hackers/root kits look for accounts with simple passwords.  We made it impossible to log in via any of those accounts.  Of course, we then only have to manage the root account on those machines.  Choices like this, local firewall configuration and secure keys/passphrases have probably saved us a mountain of trouble as well as increasing the uptime of our services.

In our requirements gathering, working with HP, it became obvious that one of our practices would change.  Before virtualisation we had virtual servers in the Apache web server: either one IP address with multiple CNAME aliases using HTTP 1.1, or multiple IP addresses, in order to home several web applications with their own sub/domain names.  While this feels like a package one administrator would be comfortable with, multiple administrators or new colleagues would have to use their skills to understand the install, with any problems that might bring.  Virtual machine infrastructure technology makes for very light virtual machines, and running more of them, as separated services, makes administration easier.  One meaningful sub/domain, one machine and one configuration.  Separating out services allows us to organise them more easily, including housing within hosts and backing them up.  If we think about hybrid cloud solutions and bursting, it makes sense to simplify virtual machines.  We used to have a dedicated IP address for the machine and separate IP addresses for web servers on a machine.  We will assume, for now, that we will have one IP address for the virtual machine and the service running on it.

There is a choice to be made about disk usage or consumption.  PostgreSQL is a better database management system than MySQL, but MySQL is very popular and some software using LAMP only works with MySQL.  I could leave the choice to the next administrator, or make both available and rely on the storage solution to take care of duplicated blocks of data across multiple virtual machines.  If the next administrator knows that the software is installed she can configure the machine to use it and skip the download.  I have been using PostgreSQL and MySQL for many years.  The greatest advantage is that there are some default tunings that can be made to both DBMSs which will be consistent across all virtual machines if I make the changes for the template.  An alternative to this is a wide tree of templates starting with the minimum install linked to many templates: a complete LAMP stack with MySQL, a complete LAMP stack with PostgreSQL, just PostgreSQL, just MySQL, Perl instead of PHP, etc.

Backups… I am going to use Amanda for now because I know I can support it for disaster recovery.  I’m sure it will get swapped out later but I do not know how backups would otherwise be provisioned in the meantime.  Amanda is free and does not need a licence: quicker and cheaper.

On file system encryption: I am not encrypting anything now.  This setup will allow the data partition to be encrypted.  If we want to encrypt the operating system partition then we need to separate out /boot from /.

Time.  We are using NTP as a belt-and-braces approach to keeping machines in sync.  VMware could guarantee the time within its own infrastructure, but if we teleport a machine to another infrastructure, or burst it to a cloud, we cannot guarantee the time is the same in the new environment.
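A minimal ntpd configuration for the template might look like this (the pool hosts are illustrative; local campus time sources would be better):

```
# /etc/ntp.conf (fragment)
driftfile /var/lib/ntp/drift
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
```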

We are using rsyslog to record interesting changes to files locally.  I have also set up the virtual machine template to send logged changes to a remote server.  Because this is done for the template, every virtual machine created from it will automatically report to the remote server.  That server will be used to generate reports and warnings, and to help investigate any compromises should they occur.
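Forwarding in rsyslog is a one-line rule in the template (the hostname is a placeholder; @@ forwards over TCP, a single @ would use UDP):

```
# /etc/rsyslog.conf (fragment)
*.* @@loghost.example.ac.uk:514
```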

I have created a script that can be run before the template virtual machine is shutdown.  This cleans up:

# SSH host keys (regenerated on first boot of each clone)
/bin/rm -f /etc/ssh/*key*
# Rotated and compressed logs
/bin/rm -f /var/log/*-???????? /var/log/*.gz
/bin/rm -f ~www/p*/logs/*-????????
# Truncate audit and login records
/bin/cat /dev/null > /var/log/audit/audit.log
/bin/cat /dev/null > /var/log/wtmp
# Persistent network device naming and per-machine MAC/UUID bindings
/bin/rm -f /etc/udev/rules.d/70*
/bin/sed -i '/^\(HWADDR\|UUID\)=/d' /etc/sysconfig/network-scripts/ifcfg-eth0
# Temporary files and root's shell history
/bin/rm -rf /tmp/*
/bin/rm -rf /var/tmp/*
/bin/rm -f ~root/.bash_history

and before the machine is shut down we should run ‘unset HISTFILE’ to prevent the current session’s history being saved.

Notes:

  • Need email for the root user (machine and web server) to go somewhere
  • Who should own responsibility for backup of e.g. SQL and disk space?
  • We send syslog events to a remote syslog server
  • Need to adjust logrotate for non-default logs
  • Add webalizer for web statistics later on?

In doing the work I will list parts of the web that have influenced the design.

A year later (update for 2016)

We now have some experience of working with VMware at scale.  We have gained experience in how resources are used and some interesting things have come up.  Backups are interesting.  We have not yet implemented a solution (licensed or free) that will snap a MySQL database and the filesystem together so that we have a consistent backup.  We therefore spit the SQL out nightly, while the application is in maintenance mode, and have that backed up by our backup solution.  Some of our services are getting big!  DMU Commons has grown by 50% in the last term; the previous size represented five years of the service.  Backing up the VM takes, relatively, a long time.  Most VMs are 50GB where The Commons is 200GB.  We want to move the users’ content to a central store and mount it by NFS.  This gives us quicker backups and the ability to easily tune the volume size.  But WordPress has the application and the data under the same directory, so we need to engineer the disk layout to support WordPress as best we can.  That is, it needs to make sense to the next tech who is asked to look at it.  We are looking at:

  • /, /tmp, /boot, swap on one volume group, disk, controller
  • /dbms on one volume group, disk, controller to support MySQL and PostgreSQL
  • /usr1 on one volume group, disk, controller for application/data
  • /usr2, possibly, in case the service grows to a size where user data should be moved.
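In /etc/fstab the layout above might come out something like this (the volume group and logical volume names are illustrative, not our actual ones; /boot stays a plain partition in case we later encrypt /):

```
# /etc/fstab (fragment)
/dev/vg_sys/lv_root    /       ext4  defaults  1 1
/dev/vg_sys/lv_tmp     /tmp    ext4  defaults  1 2
/dev/vg_sys/lv_swap    swap    swap  defaults  0 0
/dev/vg_dbms/lv_dbms   /dbms   ext4  defaults  1 2
/dev/vg_usr1/lv_usr1   /usr1   ext4  defaults  1 2
```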

We are currently looking to implement SAP; SAP recommends separate disk controllers for performance reasons.

The service is being heavily used both by creators and consumers.  We host a web analytics service on the same VM.  Running reports uses lots of RAM and CPU.  We see a need therefore to move the stats service away from the VM.  This is another reason why services should be split one per VM.

Posted in Linux SysAdmin | Tagged , , , | Leave a comment

DORA Regulatory Work Part 2b : Embargo changes

Part 1 is over here. Part 2a is over here.

Embargo in DSpace

To us it seems that the embargo code in DSpace is still being thought about.  It has changed between its first introduction in 1.6.x and 3.1.  We think we understand it and are using it in ‘Simple’ mode.  In ‘Simple’ mode, during submission, we can add an embargo to a bitstream.  This actually assigns the right for an Anonymous/guest user to read a bitstream after a start date, with no end date.

Conversations on the dspace-tech mailing list discuss what embargo metadata should and should not be exposed to programmers, because if exposed it would also be available to BadPeople.

Once we got the embargo functions working we discovered that a guest user must click to view a bitstream before they find out that it is restricted.  The message reads:

The file you are attempting to access is a restricted file and requires credentials to view. Please login below to access the file.

We wanted to tell the user that the file is embargoed, and that the embargo will be lifted on a certain date, before they click to view.  I thought that otherwise the guest to the website might not come back, a la ‘this website is under construction’.  I wanted to add the possibility to create a calendar ICS file on the fly so that the guest can have a reminder of the embargo being lifted.

So, how to do that?  I discovered that the DRI /metadata/handle/xxxx/ZZZZ/mets.xml?rightsMDTypes=METSRIGHTS does not include the start date of the rights restriction.  Also, the code xsl:call-template name=”display-rights” in item-view.xsl does not handle guest views in a way that would convey the embargo to them.  To fix this, for DMU, I added this code:

--- a/dspace-api/src/main/java/org/dspace/content/crosswalk/METSRightsCrosswalk.java
+++ b/dspace-api/src/main/java/org/dspace/content/crosswalk/METSRightsCrosswalk.java
@@ -248,7 +248,11 @@ public class METSRightsCrosswalk
            //Translate the DSpace ResourcePolicy into a <Permissions> element
            Element rightsPerm = translatePermissions(policy);
            rightsContext.addContent(rightsPerm);
-           
+
+           Element datesMD = new Element("Dates", METSRights_NS);
+           datesMD.setAttribute("START_DATE", String.valueOf(policy.getStartDate()));
+           datesMD.setAttribute("END_DATE", String.valueOf(policy.getEndDate()));
+           rightsContext.addContent(datesMD);
         }//end for each policy
 
         context.complete();

thus exposing rights:Dates/@START_DATE.  I was able to then display an embargo message based on the start date and user group from the metadata above.  I only display the embargo for this situation.  For any other situation the original display-rights code is run.

Now that I had the message and a start date, I could work on creating a utility to generate an ICS file containing the handle of the item, the date and an alarm.  Working out how to do this was tricky.  This is how I did it…

Modify the sitemap.xmap for our theme:

--- a/dspace/modules/xmlui/src/main/webapp/themes/dmu2011/sitemap.xmap
+++ b/dspace/modules/xmlui/src/main/webapp/themes/dmu2011/sitemap.xmap
@@ -15,7 +15,6 @@
     </map:components>
 
     <map:pipelines>
-
                <!--
                        Define global theme variables that are used later in this
                        sitemap. Two variables are typically defined here, the theme's
@@ -29,14 +28,28 @@
                                <theme-path>dmu2011</theme-path>
                                <theme-name>dmu2011</theme-name>
                        </global-variables>
-        </map:component-configurations>
-
+                </map:component-configurations>
+
+                <!-- Owen -->
+                <!-- ICS Calendar -->
+                <map:pipeline>
+                  <map:match pattern="utils/handle/*/*/calendar.ics">
+                    <map:generate src="xml/utils/ICSCalendar.xml">
+                    </map:generate>
+                    <map:transform src="lib/xsl/utils/ICSCalendar.xsl">
+                      <!-- map:parameter name="use-request-parameters" value="true"/ -->
+                      <map:parameter name="startDate" value="{request-param:startDate}"/>
+                      <map:parameter name="handle"    value="{1}/{2}"/>
+                    </map:transform>
+                    <map:serialize type="text" mime-type="text/calendar"/>
+                  </map:match>
+                </map:pipeline>
+                <!-- /Owen -->
 
                <map:pipeline>
                        <!-- Allow the browser to cache static content for an hour -->
                        <map:parameter name="expires" value="access plus 1 hours"/>
 
-
             <!-- handle static js and css -->
             <map:match pattern="themes/*/**.js">
                     <map:read type="ConcatenationReader" src="{2}.js">

This creates a pipeline that generates a text file with the correct MIME type ‘text/calendar‘ when URLs similar to:

/utils/handle/2086/0987/calendar.ics?startDate=2013-09-28

are requested, following the VCALENDAR v2.0 specification.

The dmu2011/lib/xsl/utils/ICSCalendar.xsl code looks like this:

<xsl:stylesheet xmlns:i18n="http://apache.org/cocoon/i18n/2.1"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"
        xmlns:confman="org.dspace.core.ConfigurationManager"
        xmlns:util="org.dspace.app.xmlui.utils.XSLUtils"
        xmlns:jstring="java.lang.String"
        xmlns:exdate="http://exslt.org/dates-and-times"
        exclude-result-prefixes="util exdate i18n xsl confman jstring">

  <xsl:param name="startDate" select="$startDate"/>
  <xsl:param name="handle"    select="$handle"/>

  <!-- xsl:value-of select="exdate:date-time()"/ -->

  <xsl:variable name="isoDateStart" select="concat(jstring:replaceAll($startDate, '-', ''), 'T100000')"/>
  <xsl:variable name="isoDateEnd"   select="concat(jstring:replaceAll($startDate, '-', ''), 'T103000')"/>

  <xsl:template match="*">BEGIN:VCALENDAR
VERSION:2.0
METHOD:PUBLISH
PRODID:-//PYVOBJECT//NONSGML Version 1//EN
BEGIN:VEVENT
UID:<xsl:value-of select="$handle"/>
DTSTART;TZID=Europe/London:<xsl:value-of select="$isoDateStart"/>
DTEND;TZID=Europe/London:<xsl:value-of select="$isoDateEnd"/>
DESCRIPTION:http://hdl.handle.net/<xsl:value-of select="$handle"/>
DTSTAMP:<xsl:value-of select="$isoDateStart"/>
LOCATION:http://www.dora.dmu.ac.uk/ or http://www.dmu.ac.uk/
SEQUENCE:1
SUMMARY:De Montfort University Research Archive mailto:dora@dmu.ac.uk
BEGIN:VALARM
ACTION:DISPLAY
DESCRIPTION:Embargo has expired
TRIGGER:-PT15M
END:VALARM
END:VEVENT
END:VCALENDAR
</xsl:template>

</xsl:stylesheet>

There are two things to note about this.  The simple one is that it includes an alarm; that is a design decision that may change.  The second is that I have deliberately left out what the item is.  The guest will have to visit the website to be reminded.  It is likely that this will change.  At the moment the code works.  Titles and descriptions can be long and might contain characters that could break the VCALENDAR format.

As per usual I need to tidy up the code, adding the correct internationalisation.  Messages will change when we have settled on what we think is the correct language.

Posted in DORA, DSpace, Library | Tagged , | Leave a comment

DORA Regulatory Work Part 2a : Mandatory fields

Part 1 is over here.

We added two new fields to our metadata:

  • dc.funder Body funding the work
  • dc.projectid Identification for the work

These are added to DSpace/dspace/config/input-forms.xml.

       <field>
         <dc-schema>dc</dc-schema>
         <dc-element>funder</dc-element>
         <dc-qualifier></dc-qualifier>
         <repeatable>true</repeatable>
         <label>Funder</label>
         <input-type>onebox</input-type>
         <hint>Enter the name of the funder.</hint>
         <required>true</required>
       </field>

       <field>
         <dc-schema>dc</dc-schema>
         <dc-element>projectid</dc-element>
         <dc-qualifier></dc-qualifier>
         <repeatable>true</repeatable>
         <label>Project Identification</label>
         <input-type>onebox</input-type>
         <hint>Enter the project identification in the box below.</hint>
         <required>true</required>
       </field>

on page 2 of the submission form.

I still need to add the drop down for usual funders.

Posted in DORA, DSpace, Library | Tagged , | Leave a comment

DORA Regulatory Work Part 1 : Installing DSpace 3.1 with DMU EXPLORER

A list:

  • Install CentOS 6 to a VM and update to 6.4
  • yum -y install rsync wget git ant
  • (configure ssh to refuse passwords)
  • yum -y groupinstall “Web Server”  “PostgreSQL Database server” “PostgreSQL Database client”
  • Install latest Java JDK
  • Set up web server as per local convention
  • Download latest Tomcat
  • Git DSpace to get 3.1
    • git clone https://github.com/DSpace/DSpace.git
    • cd DSpace/
    • git reset --hard tags/dspace-3.1
  • Locally (dmu)
    • git clone git@dodger.blue.dmu.ac.uk:/usr1/home/git/projects/DSpace.git
    • git checkout --track origin/develop
    • Follow DORA_README
  • someone else
    • extract the EXPLORER code to dspace/modules/xmlui/src/main/webapp/themes
  • Install a copy of the production PostgreSQL database
  • edit local mvn.properties file
  • mvn -Denv=dora31 package, etc
  • install Tomcat Connectors (mod_jk)
  • configure Apache
  • configure DSpace to use dmu2011 theme
  • create your indexes or searches will fail
  • remember the look and feel/images are De Montfort’s.
Posted in Uncategorized | 2 Comments

Of bot armies, brute force, admin and WordPress

Hello,

I put in some work this weekend looking at WordPress and reports about the bot army attacking WordPress installations.  Our WordPress install is a complicated beast and that in itself may, largely, protect us from the simpler attacks.  I knew about the possible threat late on Friday but family commitments prevented me from looking earlier than 5.30pm on Sunday.  In theory, I shouldn’t be working then and should be spending time with my family, but this is important.  Later, we’ll have practices in place to limit the amount of time we spend working on emergencies out of hours.

Initially, I panicked.  I thought I should shut off access to the service.  There was no one to talk to about the issues of leaving the service up or taking it down.  With my partner away for the weekend I didn’t have time to do something rash.  If I switched off the service I would be doing the hackers’ work for them.  If I didn’t, I risked having to re-install the service.

So, on Sunday I started reading around the problem.  The first few blogs I read talked about third-party, pay-for WordPress security modules.  This blog lists some things we can do to protect the service.  CloudFlare are, effectively, selling a service around WordPress security.  That led me to look at mod_security for Apache, which allows us to ‘patch’ an application without changing the application’s code.  Web application code changes take time.  Then I came across blogs, e.g. codex.wordpress.org, that explain the current attacks.  The account this attack goes after is ‘admin’.  Big phew!  When WordPress is installed it lets us change the admin user to anything at all.  Good practice.

With that known and the pressure off I decided to look at black listing functionality for WordPress.  I happened across Better WP Security which has a lot of features and 5 stars from 1300 WP admins.  It looks good but not the sort of thing you can install on a Sunday on a WordPress install as complicated as ours without going through the university’s change advisory board CAB.

With the idea of keeping things simple I started looking at the access_log.  I could see repeated attempts by some IPs to URLs including wp-login.php, /register/, wp-admin, site.  I checked those IPs against Google to see if any security web sites reported them as suspicious and active.  What I didn’t want to do was blacklist IPs from genuine users.  I added this to the root .htaccess:

<FilesMatch ".*$">
order allow,deny
allow from all
deny from x.x.x.x
deny from y.y.y.y
deny from z.z.z.z
</FilesMatch>

This code tails the access_log looking for those URLs:

tail -f access_log | egrep '404|/register/|wp-login'

Today an email popped up about an attempt to create a new site using wp-signup.  We have site registration switched off.  This could be important in protecting our service.
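Since registration is switched off anyway, wp-signup.php could be refused outright in the same .htaccess, in the same Apache 2.2 order/deny style as the IP blacklist (a sketch, untested against our install):

```
<Files "wp-signup.php">
order allow,deny
deny from all
</Files>
```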

This discovery led me to change my script to rank accesses and discover the major bots knocking on the door:

 egrep '404|/register/|wp-login|site|wp-signup' access_log | \
   awk '{ ips[$1]++ } END { for (i in ips) { print ips[i] " " i } }' | \
   sort -n

One bot has had 52696 attempts.  That this went unnoticed says something about how we work with WordPress (and other web applications).  Popular software will be targeted and we need generic tools to discover the attacks.

Well, that’s it.  I hope it helps.

Posted in CELT, Library, Linux SysAdmin, The Commons | Tagged , , , | 1 Comment

First install of Project.net on Linux

From Project.net :

Project.net for Mission Critical Project Portfolio Management

Project.net is a complete Project Portfolio Management (PPM) solution designed to capture, display, report on, and resolve the complex interrelationships organizations tackle when planning and executing major initiatives.

I’m going to publish this regularly so that interested parties can see the progress.

I have been given a VM running CentOS 6.3 with 60GB of disk, 4GB of RAM and 1 CPU.  Of course, as the service gets taken up this can be scaled up.  I’m following the document for installing on Linux.  The following is a list of steps taken including the gotchas:

  • yum update
  • yum -y install rsync
  • setup server for SSH without passwords
  • yum -y groupinstall "Web Server"
  • mkdir -p /usr1/home
  • useradd -d /usr1/home/www -c 'Web Server' www
  • chown -R www:www /var/lib/dav /var/cache/mod_proxy /var/cache/mod_ssl
  • All links to Oracle XE 10g lead to 11g.  This could be a problem.  An admin on a forum states that 11g will work.
  • yum -y install libaio-devel bc  <-- !!!!
  • unzip the oracle archive and rpm -ivh it
  • make sure /etc/hosts has your server defined in it
  • /etc/init.d/oracle-xe configure
    • Oracle Application Express likes port 8080 (so does Tomcat, hmmm.)
    • HTTP Port 8081
    • Listener Port 1521
    • Haz database!
  •  Pull Project.net archive to /usr/local/src
  • mkdir /usr/local/project.net
  • unzip archive to /usr/local/project.net
  • find /usr/local/project.net/database/ -type f -exec dos2unix {} \;
  • edit …9.2.0/new/pnetMasterDBBuild.sh to reflect install
  • "If you are using Oracle Express set the PNET_BUILD_DB_DATABASE_NAME variable to the value XE."
  • cd /usr/local/project.net/database/create-scripts/versions/9.2.0/new/
  • run the script, sip something nice, sip something nice…
  • tail -f /tmp/pnet_test_db_build.log.  The script pauses waiting for input without prompting for answers or supplying default values.  I was checking the install log when I noticed it stop and ask a question.  I hit return in the running script window.  Fingers crossed. (10pm)  Just checking the log now (9.25am).  There are errors such as:
  • install java jre 6.0.x rpm
  • alternatives --install /usr/bin/java java /usr/java/jre1.6.0_37/bin/java 6037
  • pull jce_policy-6.zip to /usr/local/src
  • cp jce/*.jar /usr/java/jre1.6.0_37/lib/security/
  • pull apache-tomcat-6.0.35.tar.gz to /usr/local/src
  • pull apache-activemq-5.7.0-bin.tar.gz to /usr/local/src
  • useradd -d /usr1/home/projectnet -c 'Project Net' projectnet
  • su – projectnet
  • tar xf /usr/local/src/apache-tomcat-6.0.35.tar.gz
  • edit .bashrc to reflect CATALINA_HOME and JAVA_HOME
  • edit ./apache-tomcat-6.0.35/conf/tomcat-users.xml change passwords
  •  cp /usr/local/project.net/lib/mail.jar /usr/local/project.net/lib/activation.jar ~/apache-tomcat-6.0.35/lib/
  • cp /usr/local/project.net/lib/jdbc/ojdbc14.jar ~/apache-tomcat-6.0.35/lib/
  • mkdir ~/apache-tomcat-6.0.35/endorsed
  • cp /usr/local/project.net/lib/endorsed/* ~/apache-tomcat-6.0.35/endorsed/
  • edit apache-tomcat-6.0.35/conf/server.xml and change port 8080 to 9090
  • edit catalina.sh to reflect production system with -Xms256m -Xmx1024m
  • logging : using Log4j 1.2.9 and commons-logging-1.1.1-bin.tar.gz
  • create /etc/init.d/tomcat
  • chkconfig tomcat on
  • yum -y install apr-devel openssl-devel ant
  • pull jdk-6u37-linux-x64-rpm.bin to /usr/local/src/
  • alternatives --install /usr/bin/javac javac /usr/java/jdk1.6.0_37/bin/javac 6038
  • alternatives --config javac
  • build APR in /usr1/home/projectnet/apache-tomcat-6.0.35/bin/tomcat-native-1.1.22-src/jni
  • ant and ant jar in /usr1/home/projectnet/apache-tomcat-6.0.35/bin/tomcat-native-1.1.22-src/jni
  • Adjust JAVA_OPTS to reflect -Djava.library.path=/usr/local/apr/lib/
  • edit bin/linux-x86-64/activemq to reflect ActiveMQ home
  • edit bin/linux-x86-64/wrapper.conf to reflect ActiveMQ home in set.default.ACTIVEMQ_HOME and set.default.ACTIVEMQ_BASE
  • ln -s /usr1/home/projectnet/apache-activemq-5.7.0/bin/linux-x86-64/activemq /etc/init.d/activemq (as root)
  • chkconfig --add activemq
  • service activemq start (check data/wrapper.log)
  • For project.net edit conf/context.xml to connect to Oracle and a mail server.
  • As projectnet: cp /usr/local/project.net/app/pnet.war ~/apache-tomcat-6.0.35/webapps/
  • mv ROOT ../ROOT.webapp
  • mv pnet.war ROOT.war
  • make sure passwords are correct in webapps/ROOT/META-INF/context.xml ./conf/Catalina/localhost/ROOT.xml conf/context.xml
  • (as root) /etc/init.d/tomcat restart
  • Haz Project.net application!
  •  Prepare Apache to be the port 80 frontend
  • create /etc/httpd/conf.d/pm4s.conf :
  • # tomcat integration
    ProxyPreserveHost On
    ProxyPass / ajp://localhost:8009/ min=5 ttl=120 keepalive=On ping=1
    ProxyPassReverse / ajp://localhost:8009/
  •  Needs SSL set up.  Done but needs proper SSL cert.
  • Configure Project.net
  • Change password and some other details
  • Set up the docvault to be in ~projectnet/docvault
  • Set up Sys.Settings to reflect the /usr1/home/projectnet install
  • Additional:
  • keystore (for LDAP cert) for java needs creating and tomcat needs to run with keystore arguments.
  • Installed the licence key.  Every user must be given the key before registration.
  • Redirect http to https.
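The "create /etc/init.d/tomcat" step above is the usual SysV wrapper script.  A minimal sketch, assuming the projectnet user and the Tomcat path from this install; the chkconfig runlevels and priorities in the header are placeholder values, not taken from our production script:

```shell
#!/bin/sh
# chkconfig: 345 80 20
# description: Apache Tomcat serving Project.net
# Minimal init wrapper sketch; TOMCAT_USER and CATALINA_HOME match
# the install above, everything else is a placeholder.
TOMCAT_USER=projectnet
CATALINA_HOME=/usr1/home/projectnet/apache-tomcat-6.0.35

tomcat_ctl() {
  case "$1" in
    start)   su - "$TOMCAT_USER" -c "$CATALINA_HOME/bin/startup.sh" ;;
    stop)    su - "$TOMCAT_USER" -c "$CATALINA_HOME/bin/shutdown.sh" ;;
    restart) tomcat_ctl stop; sleep 5; tomcat_ctl start ;;
    *)       echo "Usage: tomcat {start|stop|restart}" ;;
  esac
}

tomcat_ctl "$@"
```

With this in place, chkconfig tomcat on registers it at boot, and service tomcat restart is the step used after dropping the ROOT.war into webapps.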
Posted in Library, Linux SysAdmin | 1 Comment