Sunday, May 15, 2016

MacBook Pro 2015 - Wifi "No Hardware Installed"

I received this error today and it took quite some time to find the correct solution. Most articles advise resetting the NVRAM and SMC, but that made no difference for me.

What did make a difference was running the Apple Diagnostics tool. I'm not sure why, but running diagnostics must have reset something, and it got me going again. So if you have this issue with your MacBook, shut down the machine (Apple menu > Shut Down), then press and hold the "D" key while powering back on.

That will start the diagnostics utility. Let it do its thing and it will pop out a report at the end. For me, it complained that I didn't have the power connected, but it reported no problems otherwise. When I restarted, wifi was back as if nothing bad had happened.

YMMV, but it's definitely worth a try before taking it into the Apple Store. See this post on the Apple forum for more information.

Saturday, December 6, 2014

Synology NAS - How to make a program run at startup

The other day I created a little node.js project to keep track of some finances. Synology has a node.js package, but it just installs the tools - there is no 'container' or any other support to drop files in and have them run automagically. Maybe one day.

In the meantime, you can start your project when you SSH into the NAS. My project has a 'www' script which bootstraps my project, so to start I simply type 'node bin/www' from the project directory. But, it only runs while I'm logged in, and if I log out for any reason, the process dies. That's hardly useful when I'm away from home, or on a different PC. So I decided to have a look at starting my project as a Linux service.

After doing a lot of research into how Synology does services, and a few failed attempts at init scripts, I found that Synology DSM (since version 5 perhaps) bundles Upstart, which is a neat little tool to deal with services on Linux. It's most prevalent on Debian and derivatives (notably Ubuntu). So, here's how I got my node.js application running on startup by using Upstart.

Step 1. Create an Upstart script.

Upstart scripts live in /etc/init by default, and that's also the place they live on your Synology NAS. You name the script 'servicename.conf', where 'servicename' is whatever you want it to be called. I called mine 'foobar' because I'm inventive like that, so the file is /etc/init/foobar.conf.

You can be as simple or as comprehensive as you like. I started by using a very simple script, like the one below.
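To give a concrete starting point, a minimal foobar.conf might look like the sketch below. The paths, the node binary location, and the start/stop events are assumptions - check where the Synology node.js package actually put things on your box:

```
# /etc/init/foobar.conf - a minimal Upstart job (paths are examples)
description "foobar node.js app"

start on filesystem        # DSM may emit different startup events
stop on runlevel [016]

respawn                    # restart the process if it dies

script
    cd /volume1/apps/foobar
    exec /usr/local/bin/node bin/www
end script
```

The `script`/`end script` form works on older Upstart versions too, which is why I'd use it instead of the newer `chdir` stanza here.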

Step 2. Start the service manually

The best part about keeping it simple is that you are more likely to get it running. If there is an error in your script, it won't start and it won't tell you why. It will just say the service could not be found.

To start the service, just type 'start foobar' from the terminal. If it's happy, you'll see the process start and the PID displayed on the console. To stop it again, type 'stop foobar'.

Step 3. Check the script will auto-start

If you pass step 2 OK, then this is just a formality - the script will start when you restart your box. When ready, type 'shutdown -r now' and allow your NAS to reboot. When it comes up again, you can confirm your service is running by hitting its URL, or by checking the logs, which by default go to /var/log/upstart - all stdout from your process ends up there.

Wednesday, January 4, 2012

How to update Samsung F4EG's dodgy firmware

Today I joined the NAS club and purchased a Synology DS212j. Into it I installed my existing Samsung HD204UI 2TB hard drive, and upon starting the NAS I saw a warning that the drive required a firmware update. Apparently there is a flaw in the drive's firmware where disk writes can become corrupted when the write cache is enabled. Data corruption is nasty stuff, so it's an important update.

The NAS unit cannot update firmware, so I had to take the drive out again and reinstall into my PC.

The first issue is that since Samsung and Seagate joined forces, the FAQ relating to this problem has vanished - its old URL now goes to Seagate's home page. Searching the site yields no useful information about the problem.

So let me spare you some trouble. You can still see the old FAQ entry in Google Cache. The firmware download itself is a 583kb zip containing one file - F4EG.exe

The next step is running it. The FAQ mentions bootable media but fails to provide any further instructions. It has to be a clean MS-DOS shell, which might be tricky for many of you (it was for me), so these instructions will help you create a bootable USB key with a Windows 98 version of MS-DOS on it. Note that the instructions mention 'CTXXM.EXE' - replace that with 'F4EG.exe', and that's all you need to do.

Now it's simply a matter of restarting your machine, booting from the USB key, typing "f4eg" and letting the patch do its work. When it's done, turn off the PC (don't restart - turn it completely off) and leave it a few seconds before booting up again.

My biggest gripe here is that Seagate's web site has tossed away extremely important information for owners of these Samsung drives. I found out the long way. I hope these instructions save someone from wasting a few hours trying to piece it all together themselves.

Tuesday, January 26, 2010

Migrating Log4J properties to XML format

A few weeks on from my JBoss logging experience and I've worked out one thing so far. JBoss certainly seems to prefer the XML format over the old properties file format. So, I started to think about migrating the dozens of properties files we have to an XML format to reduce some of the pain.

I was not interested in doing that manually, and I was surprised to find that there was no tool to convert from the properties format to XML. At least, not one that I could Google.

So, in the spirit of open source, I bashed out a tool that does just that. Feed it a properties file and it will spit out an equivalent XML file. It's a bit rough, but it seems to work for the cases I threw at it, so help yourself and let me know if it works for you. You need a JRE version 1.5 or better to run it, but I figure anyone using Log4J must surely have Java installed!
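To illustrate the kind of translation involved: a root logger at INFO with a single ConsoleAppender named 'stdout' corresponds to roughly the XML below. The appender name and pattern here are examples of the mapping, not literal output from the tool:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

    <!-- equivalent of: log4j.appender.stdout=org.apache.log4j.ConsoleAppender -->
    <appender name="stdout" class="org.apache.log4j.ConsoleAppender">
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %-5p %c - %m%n"/>
        </layout>
    </appender>

    <!-- equivalent of: log4j.rootLogger=INFO, stdout -->
    <root>
        <priority value="INFO"/>
        <appender-ref ref="stdout"/>
    </root>

</log4j:configuration>
```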

The project page, downloads and source are all over here:


Tuesday, January 12, 2010

Using your own Log4J with JBoss

If you use JBoss and Log4J, you've probably seen this error in JBoss' server.log file...

log4j:ERROR A "org.jboss.logging.appender.FileAppender" object is not assignable to a "org.apache.log4j.Appender" variable.
log4j:ERROR The class "org.apache.log4j.Appender" was loaded by 
log4j:ERROR [BaseClassLoader@73f05c08{vfszip:/home/foo/work/tools/other/JEE_Servers/jboss-5.1.0.GA/server/default/deploy/smallapp.war/}] whereas object of type 
log4j:ERROR "org.jboss.logging.appender.FileAppender" was loaded by [org.jboss.bootstrap.NoAnnotationURLClassLoader@1cb1c5fa].
log4j:ERROR Could not instantiate appender named "FILE".

The InterGoogleWeb suggests "remove log4j.jar from your war and all is sweet". That's true, and it will fix the immediate problem. The webapp classloader loads org.apache.log4j.Appender from your application's copy of Log4J, while the custom JBoss appender org.jboss.logging.appender.FileAppender is loaded by a different classloader. In these multi-classloader scenarios it is correct behaviour to get a ClassCastException when two classes come from different loaders, even if they would normally be assignable (this enforces Class Namespace Isolation). Removing your log4j.jar removes Log4J from the webapp classloader, leaving JBoss's classloader to happily load both classes by itself, and the problem appears to go away.
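You can see Class Namespace Isolation in action with a small, self-contained sketch. The class names below are made up for illustration: two loaders that define the same class bytes themselves (instead of delegating to a parent) produce two distinct, mutually unassignable Class objects - exactly the situation JBoss is complaining about.

```java
public class Isolation {

    public static class Victim {
    }

    // Loads one named class itself; delegates everything else to the parent
    static class IsolatingLoader extends ClassLoader {
        private final String target;

        IsolatingLoader(String target) {
            this.target = target;
        }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (!name.equals(target)) {
                return super.loadClass(name, resolve);
            }
            try (java.io.InputStream in = getResourceAsStream(name.replace('.', '/') + ".class")) {
                if (in == null) {
                    throw new ClassNotFoundException(name);
                }
                byte[] bytes = in.readAllBytes();
                return defineClass(name, bytes, 0, bytes.length);
            } catch (java.io.IOException e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }

    // True only if the two loaders' copies of Victim are assignment-compatible
    public static boolean sameType() throws Exception {
        String name = Victim.class.getName();
        Class<?> a = new IsolatingLoader(name).loadClass(name);
        Class<?> b = new IsolatingLoader(name).loadClass(name);
        return a.isAssignableFrom(b);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("assignable across loaders? " + sameType());
    }
}
```

Both copies of Victim have identical bytes, yet isAssignableFrom reports false - same bytes, different loaders, different types.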

But since when is Log4J part of the JavaEE spec? Remove Log4J at your own peril - one day you'll deploy to Tomcat, or Resin, or any number of other app servers and get ClassNotFoundExceptions or NoClassDefFoundErrors because you did not package Log4J!

So what's happening here? Consider this fairly typical scenario.
  • Your application uses Log4J and packages it up in WEB-INF/lib. 
  • You have a log4j.properties file which is on the classpath, but not in your application's WEB-INF/classes (or you don't have one at all). 
  • You let Log4J auto-discover this properties file, rather than hard-code it or set a system property. 
This means when Log4J executes Thread.currentThread().getContextClassLoader().getResource("log4j.properties"), it will load your configuration, right?


In JBoss 5.1.0 GA (and possibly other versions), what you get is the log4j.properties bundled in jboss/bin/run.jar. That JAR is higher in the class loading hierarchy than your copy of the properties file, so it will always be picked up first. And you guessed it: the FILE appender that JBoss's log4j.properties declares is the special "org.jboss.logging.appender.FileAppender" class, the same one mentioned in the error message.
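One way to see which copy wins is to ask the context classloader directly - a throwaway diagnostic (the class name is made up), not anything JBoss provides:

```java
// Prints the URL of the first log4j.properties visible to the context
// classloader. Run inside JBoss, this should point at bin/run.jar rather
// than at your WEB-INF copy; run standalone, it prints whatever is (or
// isn't) on your classpath.
public class WhoLoadsIt {
    public static void main(String[] args) {
        java.net.URL url = Thread.currentThread().getContextClassLoader()
                .getResource("log4j.properties");
        System.out.println("log4j.properties found at: " + url);
    }
}
```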

So what to do?
  • You can try using the XML format for the Log4J configuration, because Log4J looks for log4j.xml before it looks for log4j.properties. This appears to work OK.
  • You can rename the log4j.properties inside run.jar to something else, and give JBoss a system property pointing at your own configuration file. Not sure if this would have other effects though.
  • You can package log4j.properties in WEB-INF/classes, and that seems to work. But then you can't change your logging config without re-deploying your app. 
  • Use Logback and ditch Log4J. Logback provides Log4J adapters, and it works really well. That's a higher-impact change to your app though.
  • You can probably ignore the error if you don't care about logging - it relates to your application's logging, not the server's.
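For the rename-plus-system-property option, the property Log4J honours is log4j.configuration. In JBoss 5 you can append it to JAVA_OPTS (for example in bin/run.conf); the path below is made up:

```shell
# Example only: after renaming the log4j.properties inside run.jar,
# hand the JVM an explicit Log4J config via a system property.
JAVA_OPTS="$JAVA_OPTS -Dlog4j.configuration=file:///opt/jboss/conf/my-log4j.xml"
export JAVA_OPTS
```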
If I find a solution I'll update this post, but if you have one feel free to share!

Sunday, October 18, 2009

100% unit test coverage in Java: The smells of perfection

Unit testing is defined as testing one class in complete isolation. One approach to isolate your test is to use mocks and stubs, where those mocks and stubs act in a predictable way. Over time, we have gone from rolling our own to using cool libraries like EasyMock and Mockito. With these tools, and applying the now-ubiquitous IoC pattern, it is not unreasonable to expect test coverage of 100%.

Despite these tools being invaluable to unit testing, they have limitations that lead to smells - smells that have persisted over several years. Here are my three biggest frustrations caused by today's mocking frameworks:

Cannot mock static methods
Proxy-based mocking tools generally do not support the mocking out of static methods. These tools mock classes by extending them with dynamic proxies, and since static methods cannot be extended, they can't be mocked either.

Solution: To test statics, I've used the Boundary pattern (sometimes called Repository) to defeat static cling. How else are you going to mock out a call to Calendar.getInstance()?

Every class that has a static method we want to mock out is wrapped in a boundary. So for our Calendar class, we write a CalendarBoundary that is injected into the client class. The CalendarBoundary contains non-static methods that delegate to the static equivalents. Now anyone wanting to create a Calendar can do so via the CalendarBoundary, and that can easily be mocked.
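A minimal version of the boundary might look like this - a sketch based on the description above (CalendarBoundary is the post's name for it):

```java
import java.util.Calendar;

// Instance-level boundary over the static Calendar.getInstance() call.
// Tests mock this class; production code gets the real Calendar.
public class CalendarBoundary {

    public Calendar getInstance() {
        return Calendar.getInstance();
    }
}
```

A class that needs the current time takes a CalendarBoundary in its constructor and calls boundary.getInstance() instead of hitting the static directly.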

Smell: These boundary classes are somewhat contrived and unnatural. I have to declare them as explicit dependencies of my class - that's not always a bad thing, but it can lead to strange-looking constructors that take a lot of boundaries. It's also more code to maintain and test. I could generate them, but it's easier to just call Calendar.getInstance() where I need it!

Cannot mock final classes or final methods
In a similar vein to the above, proxy-based mocking tools cannot extend final classes (like java.lang.Class) or final methods (like Calendar.getTime()).

Solution: The boundary pattern, or a class wrapper, can be used in these cases. For Calendar, I can write a class called CalendarWrapper that decorates a Calendar instance, provides the same method signatures, but the methods are not final. I need to provide delegation methods for every method I plan to use.
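A sketch of such a wrapper, delegating just a couple of methods (the real thing would need one delegate per Calendar method the code uses):

```java
import java.util.Calendar;
import java.util.Date;

// Decorates a real Calendar so that its (normally final) methods
// become mockable through this class's non-final delegates.
public class CalendarWrapper {

    private final Calendar delegate;

    public CalendarWrapper(Calendar delegate) {
        this.delegate = delegate;
    }

    public Date getTime() {            // Calendar.getTime() is final; this delegate is not
        return delegate.getTime();
    }

    public long getTimeInMillis() {
        return delegate.getTimeInMillis();
    }
}
```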

Smell: First of all, writing such boundaries manually is arduous. Calendar is a big API, and I really need to implement all of the methods and delegate to the real Calendar underneath. Sure, I could be smart and generate this code, but my API is still strange to look at and use. Some would even call it broken.

If my API uses CalendarWrapper instead of Calendar, how will users of that API react?
Dev A: "Why does this Widget class return a CalendarWrapper and not a Calendar?"
Dev B: "Oh, it's because we needed to test some final method on Calendar."
Dev C: "Right.... but I need a real Calendar so my JSP tag can render it."
Dev B: "Umm...."

You could write methods to get the real object and set the real object, but that's even more code. And what value is all this code really adding here? Isn't one aspect of well tested code a well designed API? I'd suggest this is not what they had in mind.

Cannot mock new instance creation
When we need to create a new instance of an object, usually it is so we can interact with it in some way. Imagine I am creating a new Widget, and I want to assert that this widget gets passed to a collaborator after being initialised in some way.

Solution: One approach is a mix of stateful testing and interaction testing. If it's a simple case, I can use a stateful test to see if the Widget that gets returned is in some state that I expect it to be in. Maybe the method I'm testing creates a new Widget and passes it to another collaborator, in which case I can use something like Mockito's ArgumentCaptor to test the state of the object in-flight.

Another solution is to avoid the "new". We could have some sort of generic Object Factory that creates new instances, so that

Widget widget = new Widget();

becomes

Widget widget = objectFactory.newInstance(Widget.class);

I can then get a mock objectFactory to return a mock Widget instance.
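A bare-bones version of such a factory might look like this - a sketch only, since it handles just no-arg constructors (constructor arguments are exactly where it gets ugly):

```java
// Generic factory so production code says objectFactory.newInstance(Widget.class)
// instead of "new Widget()", letting tests substitute a mock factory.
public class ObjectFactory {

    public <T> T newInstance(Class<T> type) {
        try {
            return type.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException("Cannot instantiate " + type.getName(), e);
        }
    }
}
```

In a test, a mock ObjectFactory can simply be told to return a mock Widget when newInstance(Widget.class) is called.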

Smell: The first approach can lead to very lengthy and awkward test cases. I've rarely seen a mixture of interaction (mock/verify) testing and stateful (assertEquals) testing work well together without being horribly confusing. And a confusing test is not helping anyone understand what's going on!

The second approach looks contrived and confuses developers not used to seeing it, or who don't understand why it must be done in the first place. Secondly, the ObjectFactory gets uglier when the constructor takes parameters. And last but not least, the code becomes fragile to refactoring - how would your IDE add, remove or change a constructor parameter when you are using reflection to create the new instance?

Until recently I thought these limitations were unavoidable due to the nature of the Java language. Happily, I might be wrong. PowerMock is an add-on to EasyMock and Mockito. It promises to fill the gaps - mocking out static methods, final methods and new instance creation, as well as a raft of other things previously not possible using EasyMock or Mockito.

I plan to have a look at PowerMock soon and write about my experiences, hopefully eliminating these very annoying smells.

If you've used PowerMock I'd like to hear about it! If not, come back soon and I'll hopefully have something up about it.

Saturday, February 14, 2009

Ipsedixit - Java unit testing with less code

As software engineers, we don't want to spend a lot of time performing repetitive tasks. Repetitive tasks in coding can almost always be solved through some software solution. Smart IDEs, refactoring tools, and libraries to assist database access are some examples of how software helps reduce duplication.

Unit testing has gotten a lot of focus in the last few years, primarily from the Agile/XP practice of Test-Driven Development. Arguably, TDD is now mainstream practice, not some crazy XP idea that only those mad Agile evangelists espouse on their blogs. In terms of tool support, we have JUnit, TestNG, EasyMock, JMock, and a lot of other tools to help us write and run tests.

While that's all great, there is still the issue of test data. I've seen a lot of unit tests that contain a lot of setup code just to create suitable test data.

Consider a reasonably simple JUnit4/EasyMock test for a service, where we want our service to call out to a DAO which returns a particular object:
public class MyServiceUnitTest {
    private MyDao myDao;
    private MyService myService;

    @Before
    public void setup() {
        myDao = EasyMock.createMock(MyDao.class);
        myService = new MyService(myDao);
    }

    @Test
    public void canGetDataFromDaoAndReturn() {
        Serializable primaryKey = "Don't care";
        MyDomainObject myDomainObject = new MyDomainObject();

        EasyMock.expect(myDao.load(primaryKey)).andReturn(myDomainObject);
        EasyMock.replay(myDao);

        MyDomainObject result = myService.findById(primaryKey);
        Assert.assertSame(myDomainObject, result);
    }
}
Now there's nothing actually wrong with this test, but taking TDD to the letter (fake it till you make it), the implementation of the class would actually be wrong. It would look like:
public class MyService {

    public MyDomainObject findById(Serializable id) {
        return myDao.load("Don't Care");
    }
}
The only way to force the correct implementation is to triangulate - that is, to perform the same test but pass in a different ID. That sounds like duplicated effort to me!

Whilst the example above is trivial and somewhat contrived, there is one other key point I'd like to make: I had to set up some test data to get the tests to work. There are two pieces of data that are used.

The first is the value of "primaryKey", which is declared as a Serializable. Actually, that's only partly true - it's a String as far as the test is concerned (the value is "Don't care"). I could have chosen an Integer, or Float, or any type that implements Serializable. That's a fatal flaw in my test, because I risk a ClassCastException if somewhere, someone assumes that IDs are Strings and tries to cast them as such.

The second piece of test data is the instance of MyDomainObject. Luckily, in our case it was easy to create, but what if MyDomainObject had several arguments on its constructor? Just for fun, let's put a javax.xml.transform.Transformer, an org.hibernate.Session and a javax.mail.Session on the constructor. And since the domain object is in a JAR that you don't have the source to, you can't just go and change the API :-)

public void canGetDataFromDaoAndReturn() {
    Serializable primaryKey = "Don't care";
    Transformer transformer = TransformerFactory.newInstance().newTransformer();
    org.hibernate.Session hibernateSession = sessionFactory.openSession(); // uh oh, this needs a Hibernate config or it will throw an exception!
    javax.mail.Session mailSession = ... // umm, how do I get one of these things without JavaEE?

    MyDomainObject myDomainObject = new MyDomainObject(transformer, hibernateSession, umm...);

    // continue testing ...
}
Not quite so easy to create one for the purposes of testing, is it? You could spend quite a bit of time messing around with this just to set up your test data. Not only is that tedious, but the next person to look at the test will need to interpret all that setup code, which makes the test harder to understand.

Enter Ipsedixit. Ipsedixit is a tool that takes care of thinking up that test data for you. It even knows how to "instantiate" interfaces and classes, and provides them to your test, so in the example above it doesn't matter what MyDomainObject needs on the constructor.

Using Ipsedixit, let's have a look at the unit test again...

public class MyServiceUnitTest {
    private MyService myService;
    @Mock private MyDao myDao;
    @Arbitrary private MyDomainObject myDomainObject;
    @Arbitrary private Serializable primaryKey;

    @Before
    public void setup() {
        myService = new MyService(myDao);
    }

    @Test
    public void canGetDataFromDaoAndReturn() {
        EasyMock.expect(myDao.load(primaryKey)).andReturn(myDomainObject);
        EasyMock.replay(myDao);

        MyDomainObject result = myService.findById(primaryKey);
        Assert.assertSame(myDomainObject, result);
    }
}
This improves on the previous test in a number of ways:
  • I don't have to construct any test data at all. The @Arbitrary annotation makes Ipsedixit provide an instance for you.
  • I don't need to set up my mock. Ipsedixit can create one automatically via the @Mock annotation. You may also use AtUnit for automocking, if you prefer.
  • My test code is reduced, meaning the intent of the test is clearer.
For common Java types and primitives, Ipsedixit provides random values. For example, placing an @Arbitrary annotation on an int field makes Ipsedixit populate that field with a random number. String fields get random Strings. You can even customise how the random value is provided (i.e., a string of a particular length, or a number in a certain range).

For other types, such as the ones in our example, Ipsedixit will provide a dynamic proxy. So, for the Serializable we get a JDK Proxy, and for MyDomainObject we get a CGLIB proxy. It doesn't actually matter what they are, but the point is that they are objects of the types we require.

Ipsedixit does not require any particular testing framework to run, but there are integration points for JUnit 3, JUnit 4, and Spring-Test. It would not be a hard ask to integrate TestNG and other frameworks either.

You might be surprised at how seemingly complicated tests can be simplified by introducing Ipsedixit. Give it a go today!