Monday, February 20, 2017

Import GitHub Enterprise into VMware vCenter 6.5

I wanted to try GitHub Enterprise 2.8.7 in my vCenter 6.5 environment, but the OVF import was always cancelled with an error message stating that the ProductInfo is not allowed in the envelope.

It seems the GitHub OVF template was created with a fairly old ovftool.

Fix:

1. Unpack the OVF (it's a ZIP file)
2. Edit the .ovf file and move the "ProductSection" XML element into the <VirtualSystem> node. See this Gist.
3. Afterwards, re-compute the SHA1 fingerprint of the .ovf file and update the .mf manifest file with the new fingerprint
4. Re-package the files into a new .ovf zip archive.
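
The target structure for step 2 looks roughly like this (heavily abbreviated; see the Gist for the full picture):

  <Envelope>
    ...
    <VirtualSystem ovf:id="...">
      ...
      <ProductSection>
        <!-- moved here from the Envelope level -->
      </ProductSection>
    </VirtualSystem>
  </Envelope>

Steps 3 and 4 can look like this on the shell (file names are examples):

  shasum github-enterprise-2.8.7.ovf        # shasum prints SHA1 by default
  # put the new fingerprint into the matching line of the .mf manifest, e.g.:
  #   SHA1(github-enterprise-2.8.7.ovf)= <new fingerprint>
  zip github-enterprise-2.8.7.zip *.ovf *.mf *.vmdk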

After this change, the GitHub Enterprise OVF imported fine here.


Wednesday, October 19, 2016

Grails 3 quartz-plugin with Clustering Support

If you need to run Quartz in Grails 3 in a clustered application server environment, you must change the default configuration so it is cluster-aware. Otherwise, each job runs independently on each node.

1. Create the DB Tables for Quartz

This was quite hard, and I needed to dig into the Quartz library source code to get a schema for MySQL with InnoDB (which even had a typo..). I then created a migration file for the Grails database-migration plugin.
Just copy this migration file into your grails-app/migrations directory and register it in changelog.groovy, as shown below.
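The registration itself is a plain include; a minimal sketch (the migration file name is an example):

databaseChangeLog = {
  // ... your existing changelogs ...
  include file: 'quartz-jdbc-store.groovy' // the Quartz tables migration
}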


2. Configure database-migration plugin

Next, you need to tweak the database-migration config so it ignores the Quartz tables. Otherwise, it would drop the tables with the next dbm-gorm-diff run. Example for application.groovy:

grails.plugin.databasemigration.excludeObjects = ['QRTZ_BLOB_TRIGGERS','QRTZ_CALENDARS', 'QRTZ_CRON_TRIGGERS', 'QRTZ_FIRED_TRIGGERS', 'QRTZ_JOB_DETAILS', 'QRTZ_LOCKS', 'QRTZ_PAUSED_TRIGGER_GRPS', 'QRTZ_SCHEDULER_STATE', 'QRTZ_SIMPLE_TRIGGERS', 'QRTZ_SIMPROP_TRIGGERS', 'QRTZ_TRIGGERS']


3. Configure quartz-plugin


Next, you need to configure the Grails quartz plugin to use the JDBC job store and enable clustering.
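
A sketch for application.groovy, assuming the quartz plugin's props DSL; the instance name and check-in interval are examples, tune them to your environment:

quartz {
  jdbcStore = true
  waitForJobsToCompleteOnShutdown = true
  props {
    scheduler.instanceName = 'my_app'  // must be identical on all nodes
    scheduler.instanceId = 'AUTO'      // generated, unique per node
    jobStore.'class' = 'org.springframework.scheduling.quartz.LocalDataSourceJobStore'
    jobStore.driverDelegateClass = 'org.quartz.impl.jdbcjobstore.StdJDBCDelegate'
    jobStore.tablePrefix = 'QRTZ_'     // matches the migration above
    jobStore.isClustered = true
    jobStore.clusterCheckinInterval = 5000
  }
}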

4. Test clustering

Start up your application. You should see a message like this:

  Using job-store 'org.springframework.scheduling.quartz.LocalDataSourceJobStore' - which supports persistence. and is clustered.
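
To see clustering in action, deploy a trivial job on two nodes; with the clustered JDBC store, each trigger fires on exactly one node per interval. A minimal sketch (class name and interval are examples):

class HeartbeatJob {
  static triggers = {
    simple name: 'heartbeat', repeatInterval: 60000L // once per minute
  }

  def execute() {
    // with clustering enabled, this line is logged by only one node per interval
    log.info "heartbeat fired on ${InetAddress.localHost.hostName}"
  }
}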


Friday, October 14, 2016

Grails 3.x Spring Basic Authentication with JSON handling

If you need to secure a JSON API using Basic Authentication via HTTPS, you need to tweak the Spring Security configuration and use custom beans to support JSON / HTML error responses.

If possible, use a more sophisticated authentication scheme for REST APIs, e.g. the spring-security-rest Grails plugin, which supports token-based authentication (OAuth-like).

If you still need to support Basic Auth for your Grails REST API (e.g. for server-to-server communication), read on.

Goals

  1. Support Basic Auth only on the REST API URLs; use the default (web-based) authentication on all other URLs to be secured
  2. As the REST API is stateless, no sessions should be created when accessing the API
  3. If authentication or authorization errors occur, the authenticator should return JSON error blocks if accessed with a JSON content type, and HTML errors if the API was accessed by a browser (e.g. for debugging or documentation purposes)

Implementation Details


1. CustomBasicAuthenticationEntryPoint:


import groovy.transform.CompileStatic
import groovy.util.logging.Slf4j
import org.springframework.security.core.AuthenticationException
import org.springframework.security.web.authentication.www.BasicAuthenticationEntryPoint

import javax.servlet.ServletException
import javax.servlet.http.HttpServletRequest
import javax.servlet.http.HttpServletResponse

/**
 * AuthenticationEntryPoint for Basic Authentication.
 * Triggered if the user is not (successfully) authenticated on a secured Basic Auth URL resource.
 * Maps all errors to a 401 status code and returns an HTML or JSON error string depending on the request content type.
 * Also sends a Basic Auth challenge header (to show the login popup when accessing via browser for test purposes).
 *
 * Author: Robert Oschwald
 * License: Apache 2.0
 *
 */
@CompileStatic
@Slf4j
class CustomBasicAuthenticationEntryPoint extends BasicAuthenticationEntryPoint {

  @Override
  public void commence(HttpServletRequest request, HttpServletResponse response, AuthenticationException authException)
    throws IOException, ServletException {

    String errorMessage = authException.getMessage()
    int statusCode = HttpServletResponse.SC_UNAUTHORIZED

    response.addHeader("WWW-Authenticate", "Basic realm=\"${realmName}\"")

    if (request.contentType == "application/json") {
      log.warn("Basic Authentication failed (JSON): ${errorMessage}")
      response.setContentType("application/json")
      response.sendError(statusCode, "{\"error\":${statusCode}, \"message\":\"${errorMessage}\"}")
      return
    }

    // non-json request
    response.sendError(statusCode, "$statusCode : $errorMessage")
  }

}

2. CustomBasicAuthenticationAccessDeniedHandlerImpl:


import groovy.transform.CompileStatic
import org.springframework.security.access.AccessDeniedException
import org.springframework.security.web.access.AccessDeniedHandlerImpl
import javax.servlet.ServletException
import javax.servlet.http.HttpServletRequest
import javax.servlet.http.HttpServletResponse
/**
 * Basic Auth extended implementation of
 * {@link org.springframework.security.web.access.AccessDeniedHandlerImpl}.
 * Maps errors to a 403 status code and returns an HTML or JSON error string depending on the request content type.
 * Author: Robert Oschwald
 * License: Apache 2.0
 */
@CompileStatic
class CustomBasicAuthenticationAccessDeniedHandlerImpl extends AccessDeniedHandlerImpl {

  @Override
  public void handle(HttpServletRequest request, HttpServletResponse response, AccessDeniedException accessDeniedException) throws IOException, ServletException {
    String errorMessage = accessDeniedException.getMessage()
    int statusCode = HttpServletResponse.SC_FORBIDDEN
    if (request.contentType == "application/json") {
      response.setContentType("application/json")
      response.sendError(statusCode, "{\"error\":${statusCode}, \"message\":\"${errorMessage}\"}")
      return
    }
    // non-json request
    response.sendError(statusCode, "$statusCode : $errorMessage")
  }
}

3. grails-app/conf/spring/resources.groovy:


import grails.plugin.springsecurity.SpringSecurityUtils
import org.springframework.boot.web.servlet.FilterRegistrationBean // older Boot versions: org.springframework.boot.context.embedded.FilterRegistrationBean
import org.springframework.security.web.access.ExceptionTranslationFilter
import org.springframework.security.web.authentication.www.BasicAuthenticationFilter
import org.springframework.security.web.context.NullSecurityContextRepository
import org.springframework.security.web.context.SecurityContextPersistenceFilter
import org.springframework.security.web.savedrequest.NullRequestCache
// plus the imports for the two custom classes above

beans = {

  // No sessions for Basic Auth
  statelessSecurityContextRepository(NullSecurityContextRepository) {}

  // No Sessions for Basic Auth
  customBasicRequestCache(NullRequestCache)
  
  statelessSecurityContextPersistenceFilter(SecurityContextPersistenceFilter, ref('statelessSecurityContextRepository')) {}

  statelessSecurityContextPersistenceFilterDeregistrationBean(FilterRegistrationBean){
    filter = ref('statelessSecurityContextPersistenceFilter')
    // To prevent Spring Boot automatic filter bean registration in the ApplicationContext
    enabled = false
  }

  /**
   * Sends HTTP 401 error status code + HTML/JSON error in body dependent on the request type
   * if user is not authenticated, or if authentication failed.
   */
  customBasicAuthenticationEntryPoint(CustomBasicAuthenticationEntryPoint) {
    realmName = SpringSecurityUtils.securityConfig.basic.realmName
  }

  /**
  * Sends HTTP 403 error status code + HTML/JSON error in body dependent on the request type
  * if user is authenticated, but not authorized.
  */
  basicAccessDeniedHandler(CustomBasicAuthenticationAccessDeniedHandlerImpl)
  
  customBasicAuthenticationFilter(BasicAuthenticationFilter, ref('authenticationManager'), ref('customBasicAuthenticationEntryPoint')) {
    authenticationDetailsSource = ref('authenticationDetailsSource')
    rememberMeServices = ref('rememberMeServices')
    credentialsCharset = SpringSecurityUtils.securityConfig.basic.credentialsCharset // 'UTF-8'
  }

  /** 
  * basicExceptionTranslationFilter with customBasicRequestCache (no Sessions)
  * The bean name is used in Spring-Security by default.
  */
  basicExceptionTranslationFilter(ExceptionTranslationFilter, ref('customBasicAuthenticationEntryPoint'), ref('customBasicRequestCache')) {
    accessDeniedHandler = ref('basicAccessDeniedHandler')
    authenticationTrustResolver = ref('authenticationTrustResolver')
    throwableAnalyzer = ref('throwableAnalyzer')
  }
}

4. Configure the Spring Security Core plugin in grails-app/conf/application.groovy:


// Spring Security Core plugin
grails {
  plugin {
    springsecurity {
      securityConfigType = "InterceptUrlMap" // if using the chainMap in application.groovy. If you prefer annotations, omit.
      auth.forceHttps = true
      useBasicAuth = true // Used for /api/ calls. See chainMap.
      basic.realmName = "App Authentication"
      // enforce SSL
      secureChannel.definition = [
        [pattern: '/api', access: 'REQUIRES_SECURE_CHANNEL'] // strongly recommended
        // your other secureChannel settings
      ]
      filterChain.chainMap = [
        // For the Basic Auth chain:
        // - use statelessSecurityContextPersistenceFilter instead of securityContextPersistenceFilter
        // - no exceptionTranslationFilter
        // - no anonymousAuthenticationFilter
        // As springsec-core does not support (+) on JOINED_FILTERS yet, we must state the whole chain when adding our basic auth filters. See springsec-core #437.
        [pattern: '/api/**', filters: 'securityRequestHolderFilter,channelProcessingFilter,statelessSecurityContextPersistenceFilter,logoutFilter,authenticationProcessingFilter,customBasicAuthenticationFilter,securityContextHolderAwareRequestFilter,basicExceptionTranslationFilter,filterInvocationInterceptor'], // Use Basic Auth
        [pattern: '/**', filters: 'JOINED_FILTERS,-statelessSecurityContextPersistenceFilter,-basicAuthenticationFilter,-basicExceptionTranslationFilter'] // normal auth
      ]
      interceptUrlMap = [
        [pattern: '/api/**', access: ['ROLE_API_EXAMPLE']],
        [pattern: '/**', access: ['ROLE_USER']]
      ]
    }
  }
}

5. UrlMappings definition

For the example above, you need to map your API controllers to /api/ in UrlMappings.groovy.
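
A minimal sketch (the controller name and mapping are examples):

class UrlMappings {
  static mappings = {
    "/api/example"(controller: 'example', parseRequest: true) // secured via ROLE_API_EXAMPLE above
    // ... your other mappings ...
  }
}

A quick way to verify the JSON error handling is a request with wrong credentials (host and credentials here are hypothetical); it should return a 401 with a JSON body:

  curl -i -H "Content-Type: application/json" -u wrong:wrong https://localhost:8443/api/example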





Thursday, October 6, 2016

Fortinet Route Based VPN with overlapping Networks

The other day I needed to establish an IPsec VPN on a Fortinet 60D with source NAT for an overlapping subnet scenario: the remote subnet was the same as our local one.

I only found policy-based examples in the Fortinet KB, so I tested it myself using a route-based VPN.

The trick is to create an IP pool with the source-NAT subnet range, e.g. 192.168.99.0/24.
This subnet is then presented to the remote IPsec VPN (as Proxy-ID) during IPsec Phase 2 negotiation.

Whenever you access remote resources via the VPN, your local subnet IP (e.g. 192.168.1.2) is translated 1:1 into the IP-pool subnet address (192.168.99.1) before entering the VPN.

1. Create an IP pool (Policy & Objects > IP Pools > Create New) with the following settings (a CLI equivalent for this and the static routes is sketched after this list):
  • Type: Overload
  • Range: 192.168.99.0 - 192.168.99.255
  • ARP Reply: checked
2. Create your route-based VPN (e.g. using the wizard). The type is "Custom".
In Phase 2:

  • Use your IP-pool subnet address (the source-NAT translated one created in 1.)
  • Add all remote subnets needed as Proxy-IDs.
3. Add static routes for all remote subnets (Network > Static Routes):
  • Destination: Subnet
  • Subnet specification, e.g. 192.168.243.0/24
  • Device: <Tunnel Interface for the VPN>
  • Administrative Distance: 10
4. Create Address Entries for local and remote subnets. If you use the VPN wizard, these entries are created automatically. If you configure the VPN manually or on the CLI, you must create address book entries on your own:
  • Create one entry for your local internal network, e.g: 192.168.1.0/24
  • Create entries for all remote subnets
5. Create a policy (Policy & Objects > IPv4 Policy > Create New):
  • Incoming Interface: internal
  • Outgoing Interface: <Tunnel Interface for the VPN>
  • Source: <Your local internal network Address entry created in 4.>
  • Destination Address: <remote network address definition(s) created in 4.>
  • Schedule: always
  • Service: ALL
  • Action: ACCEPT
  • NAT: enable
  • Fixed Port: disable
  • IP Pool Configuration: "Use Dynamic IP Pool". Select your Source-NAT IP Pool defined in 1.
  • Enable this policy: enabled
6. Test your communication to the remote subnet(s).
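
For reference, a rough FortiOS CLI equivalent of the IP pool (step 1) and one static route (step 3); the interface and pool names are examples, and the syntax is assumed from FortiOS 5.x:

  config firewall ippool
    edit "vpn-snat-pool"
      set type overload
      set startip 192.168.99.0
      set endip 192.168.99.255
      set arp-reply enable
    next
  end

  config router static
    edit 0
      set dst 192.168.243.0 255.255.255.0
      set device "vpn-tunnel"
      set distance 10
    next
  end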


Friday, April 10, 2015

Xcode 6.2 with iOS 8.3 devices (Swift 1.1 / 1.2 problem)

If you need to debug apps on an iOS 8.3 device, you must use Xcode 6.3.

If you are in the situation that you have this very important Swift 1.1 based application to show your customer now, and no time yet to migrate it to Swift 1.2, you must stick with Xcode 6.2. But that does not work out of the box: you receive a "Device not eligible" error or a "platform directory not found" error.

To debug / deploy your Swift 1.1 application to an iOS 8.3 device with Xcode 6.2, there is a workaround.

1. Archive the old Xcode 6.2

In Finder, go to /Applications and archive (zip) Xcode.app. This is an important step, as we need to unpack it again after the upgrade to Xcode 6.3.

2. Update Xcode to 6.3

Upgrade Xcode to 6.3 using the App Store application.

3. Rename Xcode 6.3

After the upgrade, rename Xcode.app to Xcode6.3.app

4. Unpack Xcode 6.2

Now unpack the zip file created in step 1. Afterwards, you have two Xcode applications in /Applications: the old Xcode.app (6.2) and Xcode6.3.app

5. Symlink iOS 8.3 Device Support into Xcode 6.2

Open Terminal.app and enter:

  cd /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/ 

 ln -s /Applications/Xcode6.3.app/Contents/Developer/Platforms/iPhoneOS.platform/DeviceSupport/8.3\ \(12F69\)/ 

 sudo chown -R root:wheel /Applications/Xcode.app

This symlinks the iOS 8.3 platform directory from Xcode 6.3 into Xcode 6.2.

6. Start Xcode 6.2 and run your app on an iOS 8.3 device

Start /Applications/Xcode.app and try to run your application on an iOS 8.3 device. If you still receive the "Device not eligible" error, click Product > Destination > "Your iPhone" and try again.
It might be possible that you need to issue new provisioning profiles the first time you run the app on iOS 8.3.

7. Select the command line tools

If you use Carthage, you may need to run xcode-select to select the Xcode 6.2 build tools, otherwise your Carthage dependencies fail to compile. Do not forget to switch it back to 6.3 when needed.

#> xcode-select -p   # print the path of the currently selected Xcode command line tools
#> sudo xcode-select -s /Applications/Xcode.app/Contents/Developer



Note:
For sure, the best fix is to migrate your Swift 1.1 application to Swift 1.2 asap and work with Xcode 6.3.


Friday, October 24, 2014

Auto-connect OSX IPSEC VPN on system boot / wakeup

If you have OSX 10.10 (Yosemite) or higher installed and need to automatically (re-)connect a VPN connection on system boot or wakeup, read on.

For a headless remote OSX machine, I needed to set up an automatic VPN connection so the remote device is always accessible via VPN.


1. Create the LaunchDaemon plist file
sudo vi /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist 


content:

<?xml version="1.0" encoding="UTF-8"?>  
 <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">  
 <plist version="1.0">  
  <!--  
    See http://roosbertl.blogspot.com  
    Auto-connect to named OSX VPN when network is reachable.   
    This LaunchDaemon monitors the state of the given VPN configuration.  
    If the VPN is disconnected, it pings an internet host, first (www.google.com)  
    Then it establishes the VPN again.  
    Note: using scutil to connect, as "networksetup" does not work on Yosemite to reconnect a VPN  
    Based on plist by patrix   
    http://apple.stackexchange.com/questions/42610/getting-vpn-to-auto-reconnect-on-connection-drop  
    Config:  
      1. Replace "VPN (Cisco IPSec)" below with your VPN connection name as configured in system prefs  
      2. Install this file in /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist (sudo)   
      3. Set permissions  
       sudo chown root:wheel /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist   
       sudo chmod 644 /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist   
      4. activate/update with:  
      sudo launchctl unload -w /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist   
      sudo launchctl load -w /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist   
   -->  
  <dict>  
   <key>Label</key>  
   <string>org.roosbertl.osxvpnautoconnect</string>  
   <key>ProgramArguments</key>  
   <array>  
    <string>bash</string>  
    <string>-c</string>  
    <string>(test $(networksetup -showpppoestatus "VPN (Cisco IPSec)") = 'disconnected' &amp;&amp; echo "Re-Connecting VPN (Cisco IPSec).." &amp;&amp; ping -o www.google.com &amp;&amp; scutil --nc start "VPN (Cisco IPSec)") ; sleep 10</string>  
   </array>  
   <key>RunAtLoad</key>  
   <true/>  
   <key>KeepAlive</key>  
   <true/>  
  </dict>  
 </plist>  

2. Set permissions

sudo chown root:wheel /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist 
sudo chmod 644 /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist 


3. Activate

sudo launchctl load -w /Library/LaunchDaemons/org.roosbertl.osxvpnautoconnect.plist 


Thursday, March 6, 2014

Oracle Jaxb Maven Artifact mess...

Today I wanted to upgrade jaxb-xjc from 2.1.5 to 2.1.16 and got this error:

Could not find group:com.sun.xml.bind, module:jaxb-core, version:2.1.16.

After digging into mavenrepository.com, there was no jaxb-core 2.1.16 available.
I first thought this was the usual Sun / Oracle "download our RI zip to get the artifacts" game, so I downloaded jaxb-ri-2_1_16.zip from https://jaxb.java.net/downloads/ri/ and unpacked it.

No jaxb-core.jar in the zip...

Then I found bug report https://java.net/jira/browse/JAXB-984

They messed up the POM files of all newer JAXB 2.1.x versions. The bug seems to be only partially resolved, as they closed it without fixing 2.1.16 (and some other versions).

That's a "reference implementation" I like a lot...




Saturday, October 19, 2013

Grails database-migration-plugin: DB independent diff files

If you are using the Grails database-migration plugin and your application has to support MySQL as well as Oracle, you currently have 2 choices. As the underlying Liquibase library is currently unable to create truly database-agnostic migration files when performing a diff, you can:

  • create 2 different sets of migration files, one for MySQL and one for Oracle. The drawback is that this is error-prone and anything but DRY.
  • convert the created migration files automatically so they are truly database-agnostic.

Thanks to the Grails database-migration plugin hooks (available in plugin version >= 1.3), we can do the conversion automatically on first start after creating a new migration file. Migration files are only converted once, and converted files are marked with a special comment to indicate the conversion.

In changelog.groovy, define all types you want to use for Oracle and MySQL (you can easily extend this to support other DB types):

databaseChangeLog = {
  
  /*
    DATABASE SPECIFIC TYPE PROPERTIES
   */
  property name: "text.type", value: "varchar(50)", dbms: "mysql"
  property name: "text.type", value: "varchar2(500)", dbms: "oracle"

  property name: "string.type", value: "varchar", dbms: "mysql"
  property name: "string.type", value: "varchar2", dbms: "oracle"

  property name: "boolean.type", value: "bit", dbms: "mysql"
  property name: "boolean.type", value: "number(1,0)", dbms: "oracle"

  property name: "int.type", value: "bigint", dbms: "mysql"
  property name: "int.type", value: "number(19,0)", dbms: "oracle"

  property name: "clob.type", value: "longtext", dbms: "mysql"
  property name: "clob.type", value: "clob", dbms: "oracle"

  property name: "blob.type", value: "longblob", dbms: "mysql"
  property name: "blob.type", value: "blob", dbms: "oracle"

  /* DATABASE SPECIFIC FEATURES */
  property name: "autoIncrement", value: "true", dbms: "mysql"
  property name: "autoIncrement", value: "false", dbms: "oracle"


  /* Database specific prerequisite patches */
  changeSet(author: "changelog", id: "ORACLE-PRE-1", dbms: "oracle") {
    createSequence(sequenceName: "hibernate_sequence")
  }

  /* Patch files */  
  include file: 'initial.groovy'

}

Then create a Callback Bean class for database-migration-plugin and register it in resources.groovy:

migrationCallbacks(DbmCallbacks)

Bean:

import groovy.util.logging.Slf4j
import liquibase.database.Database
import org.codehaus.groovy.grails.commons.GrailsApplication
import org.codehaus.groovy.grails.plugins.support.aware.GrailsApplicationAware

@Slf4j
class DbmCallbacks implements GrailsApplicationAware {

  GrailsApplication grailsApplication

  private static final String MIGRATION_KEY = "AUTO_REWORKED_MIGRATION_KEY"
  private static final String MIGRATION_HEADER = "/* ${MIGRATION_KEY} */"
  // DB-Specific types to liquibase properties mapping
  // see changelog.groovy for defined liquibase properties
  Map<String,String> liquibaseTypesMapping = [
          // start with specific ones, then unspecific ones.
          'type: "varchar(50)"': "type: '\\\${text.type}'",
          'type: "varchar2(500)"': "type: '\\\${text.type}'",
          'type: "varchar"': "type: '\\\${string.type}'",
          'type: "varchar2"': "type: '\\\${string.type}'",
          'type: "bit"': "type: '\\\${boolean.type}\'",
          'type: "number(1,0)"': "type: '\\\${boolean.type}'",
          'type: "bigint"': "type: '\\\${int.type}'",
          'type: "number(19,0)"': "type: '\\\${int.type}'",
          'type: "longtext"': "type: '\\\${clob.type}\'",
          'type: "clob"': "type: '\\\${clob.type}\'",
          'type: "longblob"': "type: '\\\${blob.type}\'",
          'type: "blob"': "type: '\\\${blob.type}\'",
          // regEx (e.g. "varchar(2)" to ${string.type}(2)'. Do not add trailing "'", here!
          '/.*(type: "varchar\\((.*)\\)").*/': "type: '\\\${string.type}",
          '/.*(type: "varchar2\\((.*)\\)").*/': "type: '\\\${string.type}",
          // db features
          'autoIncrement: "true"': "autoIncrement: '\\\${autoIncrement}'"
  ]

 void beforeStartMigration(Database database) {
   reworkMigrationFiles()
 }
 private void reworkMigrationFiles() {
    def config = grailsApplication.config.grails.plugin.databasemigration
    def changelogLocation = config.changelogLocation ?: 'grails-app/migrations'
    new File(changelogLocation).listFiles()?.each { File it ->
      List updateOnStartFileNames = config.updateOnStartFileNames
      if (updateOnStartFileNames?.contains(it.name)) {
        // do not convert updateOnStart files.
        return
      }
      convertMigrationFile(it)
    }
  }
 private void convertMigrationFile(File migrationFile) {
    def content = migrationFile.text
    if (content.contains(MIGRATION_KEY)) return
    liquibaseTypesMapping.each {
      String pattern = it.key
      String replace = it.value
      if (pattern.startsWith('/')) {
        // Handle regex pattern.
        def newContent = new StringBuffer()
        content.eachLine { String line ->
          def regEx = pattern[1..-2] // remove leading and trailing "/"
          def matcher = (line =~ regEx)
          if (matcher.matches() && matcher.groupCount() == 2) {
              String replaceFind = matcher[0][1] // this is the found string, e.g. 'type: "varchar(22)"'
              String replacement = "${replace}(${matcher[0][2]})\'"  // new string, e.g. "type: '${string.type}(22)' "
              line = line.replace(replaceFind, replacement)
          }
          newContent += "${line}\n"
        }
        content = newContent
      } else {
        // non-regEx, so replace all in one go.
        content = content.replaceAll(pattern, replace)
      }
    }
    // mark file as already migrated
    content = "${MIGRATION_HEADER}\n" + content
    migrationFile.write(content, 'UTF-8')
    log.warn "*** Converted database migration file ${migrationFile.name} to be database independent"
  }
}


This can surely be optimized (e.g. use only regex definitions in the map and handle the case where no matcher groups are found), but it does its job.

Tested with MySQL and Oracle 11.0.2 XE.


Building 64bit TrueCrypt for OSX

Currently, TrueCrypt binaries are only available for PPC and i386, without any hardware acceleration.
Also, the available binaries are currently under suspicion, as nobody knows whether they were compiled from the official source code or tampered with by someone (hick..).

A project is trying to get funded to audit the TrueCrypt sources and binaries for hidden backdoors: http://istruecryptauditedyet.com. The German c't magazine tried to rebuild the Windows binaries from the source code and found some suspicious differences while comparing the binaries. See here [english translation] [original article in german].

To ensure at least that you do not use tampered binaries, you can use this script to build a 64bit OSX version with hardware-accelerated AES functions from the TrueCrypt sources yourself. (For the idea and patches, see this blog post.)


#!/bin/bash
# Build TrueCrypt on OSX with 64bit and HW acc. AES
# 2013 http://roosbertl.blogspot.com
####
version=7.1a
md5="102d9652681db11c813610882332ae48"
sourcename="TrueCrypt ${version} Source.tar.gz"
####
download_filename="TrueCrypt%20${version}%20Source.tar.gz"
if [ ! -x /opt/local/bin/port ]; then
  echo "MacPorts seems not to be installed."
  echo "Please install www.macports.org, first"
  exit 1
fi
currDir=`pwd`
workDir="$0.$$"
echo "Creating TrueCrypt $version"
mkdir "$workDir"
trap "echo cleaning up; cd $currDir; rm -rf $workDir ; exit" SIGHUP SIGINT SIGTERM
cd "$workDir"   # work in the scratch directory so the cleanup can remove everything
echo "Getting required Ports.."
sudo port install wxWidgets-3.0 fuse4x nasm wget pkgconfig
sudo port select wxWidgets wxWidgets-3.0
echo " "
echo "Downloading $sourcename"
wget --quiet http://cyberside.planet.ee/truecrypt/$download_filename
echo "Checking md5.."
thisMd5=`openssl md5 < $sourcename | cut -d " " -f 2`
if [ ! "$md5" = "$thisMd5" ]; then
  echo "MD5 checksum $thisMd5 does not match expected MD5 checksum $md5"
  echo "Either the source file was modified or you tried to download a different version"
  echo "FATAL ERROR. Aborting."
  exit 1
else
  echo "Checksum is ok."
fi
echo "Extracting '$sourcename'"
tar zxf "$sourcename"
cd truecrypt-${version}-source
echo "Getting Patch file.."
wget --quiet http://www.nerdenmeister.org/truecrypt-osx.patch
mkdir Pkcs11
cd Pkcs11
echo "Getting pkcs11 headers.."
wget --quiet ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11.h
wget --quiet ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11f.h
wget --quiet ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v2-20/pkcs11t.h
cd ..
echo "Patching TrueCrypt for 64bit and HW accellerated AES.."
patch -p0 < truecrypt-osx.patch
echo "Compiling..."
make -j4
echo "Compile done."
mv Main/TrueCrypt.app "$currDir"   # move the app out before the scratch directory is removed
echo "Cleanup.."
cd "$currDir"
rm -rf "$workDir"
echo "Done creating TrueCrypt.app Version: $version"
# end





Wednesday, July 31, 2013

jMeter-Server on OSX

If you want to run a jmeter-server unattended on one or several OSX boxes, you can do the following:

1. Create /Library/LaunchAgents/org.apache.jmeter.server.plist


#>sudo vi /Library/LaunchAgents/org.apache.jmeter.server.plist


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>LimitLoadToSessionType</key>
<string>System</string>
<key>KeepAlive</key>
<true/>
<key>Label</key>
<string>org.apache.jmeter.server.plist</string>
<key>Program</key>
<string>/Applications/JMeter-2.9.app/Contents/Resources/bin/jmeter-server</string>
<key>WorkingDirectory</key>
<string>/var/log</string>
<key>RunAtLoad</key>
<true/>
</dict>
</plist>

The Program path is the path to the jmeter-server script. In the example above, I bundled jMeter 2.9 with Jar Bundler into an OSX app and added all jMeter folders (bin, lib) to Contents/Resources, so I can start the jmeter-server from the bundled app on several remote OSX boxes.
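
If you use a plain jMeter installation instead of an app bundle, just point Program at that location (the path here is an example):

<key>Program</key>
<string>/usr/local/jmeter-2.9/bin/jmeter-server</string>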

2. Load the plist file in launchctl:



# sudo launchctl load /Library/LaunchAgents/org.apache.jmeter.server.plist

This should immediately start the jmeter-server with its working directory set to /var/log (so jmeter-server.log ends up in the system log directory).

3. Register remote jmeter-servers in jMeter

To register the jmeter-server instances in your local jMeter program, edit bin/jmeter.properties and set the property "remote_hosts". Add your remote jmeter-servers as comma-separated IP addresses. Example:

remote_hosts=127.0.0.1,192.168.17.12

Thursday, June 13, 2013

Creating OSX FusionDrive with Recovery Partition

Today I received several 2010-model iMacs which had been upgraded with an additional 3rd-party SSD by an Apple reseller. The reseller created a FusionDrive using the HDD and the SSD.

After reception, I noticed that no Recovery Partition was available. The whole disks were occupied by the CoreStorage volume.

Almost every instruction I found on the web for creating a FusionDrive did not preserve a Recovery Partition. So I rebuilt the FusionDrive with a working Recovery Partition on my own, staying as close as possible to the default stock Apple Fusion Drive configuration you get on a Mac with a preconfigured Fusion Drive. (The original Apple partitioning of a 27" late 2012 iMac Fusion Drive is at the end of this article.)

Warning: This procedure deletes your data from the disks! Use at your own risk.
Note: Take a backup of all your data before proceeding, as all data will be wiped. I take backups using TimeMachine and CarbonCopyCloner.

Prerequisites:

  • Install CarbonCopyCloner
  • Create a CarbonCopyCloner clone of your internal HDD on an external USB HDD (we will boot from it later). If you receive a warning that no Recovery HD exists on the target USB drive, open the CarbonCopyCloner Window > Disk Utility > Recovery HD, select your USB drive and clone the Recovery Partition onto the USB drive.

Did I mention taking a TimeMachine backup as well? Do that to be on the safe side.

After you have made your backups, proceed with these steps:

1. Boot from your CarbonCopyCloner USB clone by pressing the Option (Alt) key during power up (or boot the recovery partition using CMD-R after power on).
2. Start a Terminal
3. Check your current disk partitions and CoreStorage setup:

# sudo diskutil cs list 
CoreStorage logical volume groups (1 found)

|
+-- Logical Volume Group 78E316BB-911C-4456-9128-6925CDC3AE5F
    =========================================================
    Name:         FusionDrive
    Status:       Online
    Size:         1127552614400 B (1.1 TB)
    Free Space:   19023224832 B (19.0 GB)
    |
    +-< Physical Volume A3B20C13-4576-4FDF-A40D-F23BAA493C4C
    |   ----------------------------------------------------
    |   Index:    0
    |   Disk:     disk0s2
    |   Status:   Online
    |   Size:     127691702272 B (127.7 GB)
    |
    +-< Physical Volume 2E75ED2E-909F-44AF-A58E-57F94ABAD85C
    |   ----------------------------------------------------
    |   Index:    1
    |   Disk:     disk1s2
    |   Status:   Online
    |   Size:     999860912128 B (999.9 GB)
    |
    +-> Logical Volume Family FE296C1B-9152-42FE-8C6A-40DE18D747FA
        ----------------------------------------------------------
        Encryption Status:       Unlocked
        Encryption Type:         None
        Conversion Status:       NoConversion
        Conversion Direction:    -none-
        Has Encrypted Extents:   No
        Fully Secure:            No
        Passphrase Required:     No
        |
        +-> Logical Volume 2B4753CB-7D8C-4E57-BA81-C643AE84BF4F
            ---------------------------------------------------
            Disk:               disk2
            Status:             Online
            Size (Total):       1100000002048 B (1.1 TB)
            Size (Converted):   -none-
            Revertible:         No
            LV Name:            Macintosh HD
            Volume Name:        Macintosh HD
            Content Hint:       Apple_HFS


4. Note the UUID identifier of the Logical Volume Group (the first UUID in the listing above)
5. Split up the existing FusionDrive CoreStorage volume. If you do not have a CoreStorage volume set up, you can skip this step:

# sudo diskutil cs delete <YOUR_UUID>, example:
# sudo diskutil cs delete 78E316BB-911C-4456-9128-6925CDC3AE5F

6. Format the internal HDD using Disk Utility
7. Start CarbonCopyCloner, then open Window > Hard Disk Management. Tap on "Recovery HD", select your internal HDD volume and click the Create Recovery-HD partition button.
8. Now it is time to create your CoreStorage Logical Volume Group. In contrast to many instructions on the net, we will not use the whole internal HDD, but only the free partition on the HDD! It is also important to specify the SSD as the first disk to get optimum speed.
9. Check your current partitioning:

# diskutil list
/dev/disk0
   #:                       TYPE NAME        SIZE     IDENTIFIER
   0:      GUID_partition_scheme             *128.0GB disk0
   1:                        EFI             209.7 MB disk0s1
   2:                  Apple_HFS Untitled    127.7 GB disk0s2
/dev/disk1
   #:                       TYPE NAME        SIZE     IDENTIFIER
   0:      GUID_partition_scheme             *1.0 TB  disk1
   1:                        EFI             209.7 MB disk1s1
   2:                  Apple_HFS hdd         999.2 GB disk1s2
   3:                 Apple_Boot Recovery HD 784.2 MB disk1s3

/dev/disk3
   #:                       TYPE NAME        SIZE     IDENTIFIER
   0:                  Apple_HFS CarbonCopy  *998.7GB disk4

The partition disk1s2 is the free partition on the internal disk we will use for the FusionDrive.
The partition disk1s3 is the newly created Recovery Partition.

10. Create a new CoreStorage Volume. Disk0 in this example is the SSD drive.

# sudo diskutil cs create FusionDrive disk0 disk1s2
Password:
Started CoreStorage operation
Unmounting disk0
Repartitioning disk0
Unmounting disk
Creating the partition map
Rediscovering disk0
Adding disk0s2 to Logical Volume Group
Unmounting disk1s2
Touching partition type on disk1s2
Adding disk1s2 to Logical Volume Group
Creating Core Storage Logical Volume Group
Switching disk0s2 to Core Storage
Switching disk1s2 to Core Storage
Waiting for Logical Volume Group to appear
Discovered new Logical Volume Group "71377E10-7126-4E7B-A52D-F96F383F56BA"
Core Storage LVG UUID: 71377E10-7126-4E7B-A52D-F96F383F56BA
Finished CoreStorage operation

11. Now create the CoreStorage Logical Volume:
Note the LVG UUID printed by the command in step 10 (the "Core Storage LVG UUID" line) and use that id:

diskutil cs createVolume <YOUR_LVG_UUID> jhfs+ "Macintosh HD" 100%

Example:
diskutil cs createVolume 71377E10-7126-4E7B-A52D-F96F383F56BA jhfs+ "Macintosh HD" 100%

Started CoreStorage operation
Waiting for Logical Volume to appear
Formatting file system for Logical Volume
Initialized /dev/rdisk6 as a 1 TB HFS Plus volume with a 90112k journal
Mounting disk
Core Storage LV UUID: 04085D2E-D630-4EED-BED4-B0EFDF6C7834
Core Storage disk: disk6
Finished CoreStorage operation


11a. Check the CoreStorage setup:

#diskutil cs list
CoreStorage logical volume groups (3 found)
|

+-- Logical Volume Group 71377E10-7126-4E7B-A52D-F96F383F56BA
    =========================================================
    Name:         FusionDrive
    Status:       Online
    Size:         1126902611968 B (1.1 TB)
    Free Space:   73728 B (73.7 KB)
    |
    +-< Physical Volume 4512A812-D998-4A45-AF5E-CB6F8EE4BD2D
    |   ----------------------------------------------------
    |   Index:    0
    |   Disk:     disk0s2
    |   Status:   Online
    |   Size:     127691702272 B (127.7 GB)
    |
    +-< Physical Volume 2344BA4E-2460-4631-B52A-BFFB6DBBA9C7
    |   ----------------------------------------------------
    |   Index:    1
    |   Disk:     disk1s2
    |   Status:   Online
    |   Size:     999210909696 B (999.2 GB)
    |
    +-> Logical Volume Family 6994CC0B-958C-4CF1-A4BF-7B7553478619
        ----------------------------------------------------------
        Encryption Status:       Unlocked
        Encryption Type:         None
        Conversion Status:       NoConversion
        Conversion Direction:    -none-
        Has Encrypted Extents:   No
        Fully Secure:            No
        Passphrase Required:     No
        |
        +-> Logical Volume 04085D2E-D630-4EED-BED4-B0EFDF6C7834
            ---------------------------------------------------
            Disk:               disk6
            Status:             Online
            Size (Total):       1118375247872 B (1.1 TB)
            Size (Converted):   -none-
            Revertible:         No
            LV Name:            Macintosh HD
            Volume Name:        Macintosh HD
            Content Hint:       Apple_HFS

12. Start CarbonCopyCloner and clone back the USB boot drive to your newly created FusionDrive (named "Macintosh HD" in step 11).
13. Reboot your system and try to boot the recovery partition by pressing the ALT key during reboot.
14. Reboot your system from the FusionDrive.
15. Enable TRIM support to keep your SSD speed high over time. You can patch the OSX driver yourself, or use tools like http://www.groths.org/trim-enabler/ or Chameleon Trim Enabler.



For reference, here is the partition printout of a stock Apple 2012 iMac with the original FusionDrive configuration:
# diskutil list
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *121.3 GB   disk0
   1:                        EFI                         209.7 MB   disk0s1
   2:          Apple_CoreStorage                         121.0 GB   disk0s2
   3:                 Apple_Boot Boot OS X               134.2 MB   disk0s3
/dev/disk1
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk1
   1:                        EFI                         209.7 MB   disk1s1
   2:          Apple_CoreStorage                         999.3 GB   disk1s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk1s3
/dev/disk2
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                  Apple_HFS Macintosh HD           *1.1 TB     disk2