Thursday, November 26, 2020

Upgrading Ubuntu MATE from 18.04 to 20.04

The automated distro update feature of Ubuntu used to be very hit or miss.  It wasn't unusual to finish with a bricked system and have to start all over.  A while back, Canonical seemed to decide that this was too much trouble and removed it from Ubuntu.  For the last decade or so I got used to just doing a fresh install, maybe clearing the root file system if it was on a separate partition.

Now this feature is back, so I thought I'd give it a try on my Ubuntu MATE 18.04 laptop.  I thought this would be a decent challenge.  It's a dual-boot UEFI laptop with Windows 10, and it has a weird quirk where, as far as I can tell, the BIOS is hard-coded to boot /EFI/Microsoft/Boot/bootmgfw.efi.  (I've been copying /EFI/ubuntu/grubx64.efi to /EFI/Microsoft/Boot/bootmgfw.efi for 7 years.  Not making this up.  See Ubuntu – How to get the HP laptop to boot into grub from the new efi file.)

This worked pretty well for me.  There was no trouble with the dual boot configuration.

I needed to reinstall various things that are not part of the Ubuntu distro but come from PPAs (Brave, Docker, Dropbox, Google Chrome, Percona, VirtualBox), but this was expected.

I needed to reinstall Postman using Snap.  I'm not exactly sure how I installed Postman in the first place.  I find Snap kind of annoying: why do we need two different package managers, apt and snap?  Before you know it, Linux will be like the Mac with half a dozen.  But anyway, this worked fine.  Chromium has also become a snap.

Hibernate (save to and resume from disk) can be tricky to set up, but the distro upgrade left my hibernate configuration alone and it continued to work afterwards.

In the past I tried a technique to automate the re-installation of packages from the old distro on the new one (example: What does the “apt-get dselect-upgrade” command do?).  I didn't get good results doing it this way because the inventory of packages changes a lot from one distro to the next, and it still required some manual steps to arrive at a collection of packages that would work on the new distro.  The distro upgrade takes care of all this for you now, keeping the packages that are still supported and removing the ones that are not.

One thing the distro upgrade didn't catch was the Beats Audio configuration.  I don't think I bothered with this when I upgraded from 16 to 18 a couple of years ago, so that tells you something about how much I use audio on this computer.  Here is a page that outlines the basic approach: Beats Audio On Linux.  Here are the HDA Jack Retask settings that are working for my HP Envy 17t-j000:

[screenshot: HDA Jack Retask settings for the HP Envy 17t-j000]

Generally, I was pretty happy with this experience.  Expecting the distro upgrade to figure out the Beats Audio configuration would be unreasonable, since these pinouts are not documented anywhere.  Other than that, I probably spent no more than 2 hours on this, mostly updating the non-Ubuntu PPAs (not including downloading time, which was considerable, maybe another 2-3 hours).

I'm also feeling pretty upbeat about the computer itself, which is still soldiering on after more than 7 years.  This is the longest I've ever kept a laptop.  I'm not really feeling the obsolescence pressure to upgrade the way I used to, which in the past mainly came from inadequate memory.  The current laptop has 16 GB, which is still plenty for the foreseeable future, assuming I don't start needing to work heavily with Kubernetes or something.

My own attitude towards this has changed a lot, too.  It used to be like Christmas morning when I installed a new distro.  There's still a tiny bit of that, but mostly I just want things to work now, with a minimal investment of time and effort.  Part of the reason for that is my career got a lot more challenging after 2011 and I was spending more personal time either learning the new stack at work or cramming for the next interview, so the Linux hobby project had to go on the back burner.  Also, the changes these days are evolutionary rather than revolutionary.  Linux is still my favorite desktop OS, but it's mature now and a lot of the excitement is gone.

nvidia-smi now works out of the box for me, where before I had to manually install CUDA and experiment with a bunch of different nvidia drivers to get it to work:

[screenshot: nvidia-smi output]

Maybe I'll take another look at TensorFlow...

Here are a few notable changes in Ubuntu 20.04:

  • No more 32-bit support.  My 'travel' computer is a 32-bit Dell Latitude 630, so I might have to consider other Linux distros if it's still working in 2023...
  • Nvidia drivers are included with the distro.

Saturday, July 20, 2019

Uberconf 3

On Day 3 I went to Brian Sletten's Machine Learning sessions:

Machine Learning: Overview

"The term machine learning refers to the automated detection of
meaningful patterns in data."

Machine Learning: Natural Language Processing

Machine Learning: Deep Learning

Research into neural networks began in the 1950s with the concept of emulating a neuron by summing a set of weighted inputs, with a response being triggered if the combined value exceeded a threshold.  This was known as a perceptron; a perceptron is a single-layer neural network.

Deep Learning refers to the use of neural networks that are more than one layer deep.
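
Here's a toy sketch of that idea in Java (my own example, not from the session; the weights and threshold are made-up numbers):

// A minimal perceptron: weighted sum of inputs, fire if over threshold.
public class Perceptron {
    private final double[] weights;
    private final double threshold;

    public Perceptron(double[] weights, double threshold) {
        this.weights = weights;
        this.threshold = threshold;
    }

    // Returns 1 if the weighted sum of the inputs exceeds the threshold, else 0.
    public int fire(double[] inputs) {
        double sum = 0.0;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i] * inputs[i];
        }
        return sum > threshold ? 1 : 0;
    }

    public static void main(String[] args) {
        // A two-input perceptron acting as an AND gate: fires only when both inputs are 1.
        Perceptron and = new Perceptron(new double[] { 0.6, 0.6 }, 1.0);
        System.out.println(and.fire(new double[] { 1, 1 })); // 1
        System.out.println(and.fire(new double[] { 1, 0 })); // 0
    }
}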

Machine Learning: TensorFlow

This brought things a bit more down to earth with the focus on a particular software implementation of machine learning algorithms.

This topic probably comes closest to something I'd be able to apply at work.  As I mentioned earlier, there was not a very strong lineup of Big Data topics.

TensorFlow uses GPGPU (general-purpose computing on graphics processing units) to accelerate processing.  I was aware that bitcoin mining software did this, but this was the first time I saw the acronym or really became aware that GPGPU was being used for much else.  This is your excuse for getting a really expensive, top-of-the-line graphics card.  If we start applying machine learning algorithms at work, my guess is we will build on top of Spark MLlib, since we already have such a big investment in the Hadoop, Scala, Spark stack.  It's odd that Brian didn't mention this at all.

There's a JavaScript API called TensorFlow.js that allows you to run ML in your browser!  Your phone!  IoT devices!  And here's another thing you can debug in Chrome DevTools.

A few miscellaneous comments about the show in general...

Women are definitely better represented than in the bad old days.  I would say the percentage of women has increased from around 5% to 20 or 25%.

The Mac is still a popular choice and I didn't notice anyone else running Linux.

Ditch Gradle, maybe reconsider the whole corporate sponsorship thing.

Thursday, July 18, 2019

Uberconf 2

Today I did 5 back-to-back sessions with Venkat Subramaniam.  If you've never had the pleasure, he is a high-energy and fun presenter who seems to have mastered quite a wide range of software development topics.

Exploring Modern Javascript

This was a fast and fun presentation.  I'm up to speed enough on JavaScript that I was able to keep up and complete the labs in time.  Modern JavaScript has map, reduce and filter, cool.  I got a little more practice with Visual Studio Code.  Venkat was going pretty fast and asked us to take only a short break, and even so he just barely finished, so this might be a longer session that was trimmed to 3 hours.

Creating React Applications

I didn't fare quite so well on this one.  Why do they schedule presentations during nap time?  Also, I really could have benefited from spending a couple of hours doing the React 'hello world' exercises beforehand, the way I did with Vue.  Venkat was still zooming through the material like the Energizer Bunny.  When we got to the first exercise I found myself just staring at an empty JavaScript file without a clue.  I finished up the lab at dinner (with the benefit of having the solutions in front of me).  This wasn't a total loss but I definitely could have gotten more out of it.

Taking polyglot programming to the next level with GraalVM

This is a very cool and promising new technology.  I didn't have any trouble bringing up "hello world" using GraalVM's javac and java, or building the insanely fast native image.  Being able to debug other languages in Chrome DevTools is another cool feature.
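
For reference, roughly what the exercise boiled down to (my reconstruction, assuming GraalVM is installed and the native-image tool has been added with 'gu install native-image'):

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello from GraalVM!");
    }
}

// With GraalVM's bin directory on the PATH:
//   javac HelloWorld.java      (compile with GraalVM's javac)
//   java HelloWorld            (run on the GraalVM JVM)
//   native-image HelloWorld    (build a native executable, ./helloworld)
// The native executable starts in milliseconds, with no JVM warmup.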

A fire alarm livened up one of the afternoon sessions.  Never a dull moment at Uberconf!

Panel discussion

One of the better questions was 'What strongly held opinion have you done a 180 on?'  Several speakers had responses along the lines of 'I used to think it was all about technical talent and skill, but now I realize relationships, collaboration and soft skills are more important.'  One panelist said he had pulled a 180 on Microsoft ('I used to be an irritatingly passionate Apple fanboy').

The panelists mentioned a number of books.  There was an amusing moment when several of them described their awkward manual systems (e.g. spreadsheets) for keeping track of books they want to read, books they have read, and so on.  A barely audible comment came from Brian Sletten: 'goodreads.com'.  Goodreads is an awesome tool for managing your reading and sharing ratings and reviews of books with your friends; I've been using it since 2012 or so.  Be my goodreads friend.

Uberconf 1

This was the longest day, 7:30-20:00.

Highlights:

A Vue Perspective (3h) and Testing Vue with Raju Gandhi.
Based on my extremely limited exposure, Vue seems like the most elegant of the "big three" client-side frameworks.  It's also the one that lacks a big corporate sponsor (Angular comes from Google and React comes from Facebook).

I used Visual Studio Code for the coding parts.  It feels like I am getting over the hump with this tool.  This free Microsoft IDE has really taken off in the last 2 years or so.  The thing that first caught my attention was its support for Spring Tool Suite 4.

Craig Walls' Securing Spring REST & OAuth was excellent.  I have used Spring Security at work a bit but was still feeling like it hadn't quite clicked for me, and this helped.  OAuth is a riddle wrapped in a mystery inside an enigma, but Craig makes it seem easy.

My final 'Uberconf after dark' presentation was Neal Ford's 'Stories Every Developer Should Know'.  This was an entertaining collection of cautionary tales about projects that went disastrously wrong and the lessons we can learn from them.  I never knew that Enterprise JavaBeans came out of something called the San Francisco Project at IBM.

Here are a couple of presentations from SpringOne Platform 2017 about STS 4 and Visual Studio Code:
Spring Tools 4 - Eclipse and Beyond 
Erich Gamma at SpringOne Platform 2017 Visual Studio Code 


The "Keynote" with Hans Doktor of Gradle was just a naked infomercial.

Wednesday, July 17, 2019

Uberconf 0

Really impressive performance by the keynote speaker, Jessica Kerr.  All the audiovisual stuff died and she spoke from memory for 45 minutes or an hour.
The Origins of Opera and the Future of Programming (or: Collective problem solving in music, art, science, and software) appears to be identical, or very close, to the keynote speech.

Monday, July 15, 2019

Night before Uberconf 2019

It's been almost 5 years since my last NFJS event.

Back then I was complaining that there wasn't much new under the sun in the Java world.  The picture is a bit different today.  Java 8 and to some degree 9-12 have mostly addressed the concerns about the stagnation of the Java language itself.

The insane proliferation of JavaScript frameworks has shaken out somewhat, with Angular, React and Vue emerging as the "big 3".  I work in AngularJS now and am feeling a bit less angsty about the whole client side than I did back in 2014.

One thing that is conspicuous to me in this year's lineup is quite a few more women presenters than in the past.  It will be interesting to see if there are more women attending this event which has traditionally been 95% male.

As always there are a bunch of agonizing choices to make.  Anybody got a Time-Turner?

A big focus at my employer right now is big data.  We are using Hadoop, Spark and Scala.  I think I first heard about Hadoop at NFJS 2010.  I see Scala on the schedule (cool language in its own right aside from big data) and a bunch of Machine Learning, but otherwise not that much.  I guess big data is ancient history these days and not that much of a focus in and of itself.

There are several talks about the recent Java releases and modular Java, which kind of makes sense.  I gather a lot of users are used to sitting on the sidelines for the first year or so of a new Java release and aren't really up to speed on the new 6-month cadence.

Looks like Kotlin is a fairly hot topic.  I would kind of like to jump on this bandwagon but I'm still striving for Scala mastery... maybe next year.

There's a ton of Angular, React, Vue and JavaScript and I will probably be spending a bunch of time in these.  I'm doing better with client side than I have been for a long time but still see this as a weak area, especially with things moving so fast.

Craig Walls is going to be there with half a dozen or so Spring presentations.  Spring has been a special focus of mine for a long time.  In a way I'm working in Spring almost every day, but the new features like Reactive aren't that useful to my employer's business model, so I'm not really getting hands-on with something new every day.

I'm going to try to catch at least one of the GraalVM presentations, sounds cool and could be the wave of the future.  There's a lot of emphasis these days on performance and getting the fastest possible startup time for microservices.

Microservices, architecture, DevOps, cloudy stuff, agile, security, soft skills...  It looks like NoSQL is pretty much played out as a topic.  There are still a couple of AMQP talks, though Kafka is the new messaging hotness.

One last thing I'll mention: there's a talk called Web Components: The Future of Web Development is Here.  I went to a couple of talks about web components and Polymer back in 2014 and, blown away by the clean, component-based model, wrote "The way most web development is done is going to completely change in the next year or so."  Well, that might be true, but not because of the widespread adoption of Web Components I envisioned.  But now, 5 years later, they really mean it!

Saturday, October 27, 2018

Spring Boot, Spring Data JPA, jackson-dataformat-xml

At work, we had a RESTful web service application implemented in Groovy/Grails.  It probably started out as a SOAP web service and was then ported to Grails 1.1.1 (2009).  It worked, but had become almost unmaintainable due to the archaic tech stack.  Also, there was a desire to move away from Grails, which few on the team knew, in favor of Java 8 and Spring.

This was about a one month project.  It built on the experience I got doing RESTful Web Service in Spring.

Since I was using Spring Boot, this web application could have run as a stand-alone Java app using the embedded servlet container support, and this is how I was debugging it at first, but the requirement was to have a drop-in replacement for the old Grails WAR file running in Wildfly, so that was an additional wrinkle.

The original service, which my program had to emulate closely, did not use HTTP methods in a RESTful manner, instead using GET and POST for almost everything.  This alone ruled out the use of Spring Data REST, so I needed to write controllers.
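
A hypothetical sketch of the controller shape (Widget, WidgetDto and WidgetRepository are made-up names, sketched further down; the real entities and paths were different):

import java.util.List;
import java.util.stream.Collectors;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/widgets")
public class WidgetController {

    private final WidgetRepository repository;

    public WidgetController(WidgetRepository repository) {
        this.repository = repository;
    }

    // Reads arrived as GETs...
    @GetMapping(produces = "application/xml")
    public List<WidgetDto> list() {
        return repository.findAll().stream()
                .map(WidgetDto::from)
                .collect(Collectors.toList());
    }

    // ...but creates, updates and deletes all arrived as POSTs, mirroring the
    // legacy Grails service, so there are no PUT or DELETE mappings.
    @PostMapping(consumes = "application/xml", produces = "application/xml")
    public WidgetDto save(@RequestBody WidgetDto dto) {
        return WidgetDto.from(repository.save(dto.toEntity()));
    }
}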

I started by creating a suite of integration tests which could be pointed at either the old or the new service.  This was invaluable for verifying that the new service behaved identically to the old one; running these tests exposed many problems in my implementation in time for me to correct them.
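
The idea, roughly (a minimal sketch, assuming JUnit 4 and a base URL supplied as a system property; the real suite compared full payloads):

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.springframework.web.client.RestTemplate;

public class WidgetEndpointIT {

    // Point this at the legacy Grails service or the new Spring Boot one.
    private final String baseUrl = System.getProperty("service.baseUrl", "http://localhost:8080");
    private final RestTemplate rest = new RestTemplate();

    @Test
    public void listReturnsXml() {
        String body = rest.getForObject(baseUrl + "/widgets", String.class);
        assertTrue(body.startsWith("<"));  // crude check; the real assertions compared whole responses
    }
}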

The application consists of a collection of endpoints, each providing read/write access to one table in the Oracle database.  Spring Data JPA really shone here.  I rarely needed to give much thought to the database access part, and in the one case where I needed something outside the method-name based query model (the next value from an Oracle sequence), the native query escape hatch was there.  I think I like this better than GORM.  GORM tries to make everything easy for you, but I always wound up spending a lot of time resolving issues with transactions, sessions and cascading... these issues just seem clearer and less hidden in Spring Data JPA.
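
Continuing the hypothetical Widget example (the entity, field and sequence names are all made up):

// Widget.java
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Widget {
    @Id
    private Long id;
    private String name;
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// WidgetRepository.java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

public interface WidgetRepository extends JpaRepository<Widget, Long> {

    // Derived query: Spring Data generates the SQL from the method name.
    List<Widget> findByName(String name);

    // Native query escape hatch for the one thing the method-name model
    // can't express: the next value from an Oracle sequence.
    @Query(value = "SELECT widget_seq.NEXTVAL FROM dual", nativeQuery = true)
    Long nextWidgetId();
}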

For rendering the responses in XML, I used jackson-dataformat-xml.  See How to write an XML REST service.  At first I tried using the same classes for the database entities and the POJOs to be rendered into XML in the response, but the complexity of that soon got out of hand, and I resorted to creating a separate set of DTO classes for the response POJOs.  This was the most unsatisfying part of the task for me: a whole array of DTO classes with conversion methods ('from' and 'to') added a bunch of pure infrastructure code that had nothing to do with the business logic.
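
The pattern looked roughly like this (again the hypothetical Widget, not one of the real DTOs):

import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlRootElement;

@JacksonXmlRootElement(localName = "widget")
public class WidgetDto {

    @JacksonXmlProperty(isAttribute = true)
    private Long id;

    @JacksonXmlProperty
    private String name;

    // 'from': build the response DTO from the JPA entity.
    public static WidgetDto from(Widget entity) {
        WidgetDto dto = new WidgetDto();
        dto.id = entity.getId();
        dto.name = entity.getName();
        return dto;
    }

    // 'to': build an entity from an incoming request body.
    public Widget toEntity() {
        Widget entity = new Widget();
        entity.setId(id);
        entity.setName(name);
        return entity;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
}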

Going into Wildfly was mostly straightforward.  The one exception was authentication/authorization.  We had a custom Wildfly security domain which accessed LDAP for user data and Oracle for group and role data.  I was not able to find a generic way to support an arbitrary Wildfly security domain with Spring Security.  I think the way to do this is with a custom authentication provider and user details service, but the clock ran out on me and we went with something much simpler.
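
For what it's worth, here's my rough sketch of the road not taken; the ldapAuthenticate and loadRolesFromOracle helpers are placeholders I made up, not real APIs:

import org.springframework.security.authentication.AuthenticationProvider;
import org.springframework.security.authentication.BadCredentialsException;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.core.authority.AuthorityUtils;

// Sketch of a custom AuthenticationProvider that would check the password
// against LDAP and load roles from Oracle, like the Wildfly security domain did.
public class WildflyStyleAuthenticationProvider implements AuthenticationProvider {

    @Override
    public Authentication authenticate(Authentication authentication) throws AuthenticationException {
        String username = authentication.getName();
        String password = authentication.getCredentials().toString();
        if (!ldapAuthenticate(username, password)) {
            throw new BadCredentialsException("LDAP authentication failed");
        }
        return new UsernamePasswordAuthenticationToken(
                username, password,
                AuthorityUtils.createAuthorityList(loadRolesFromOracle(username)));
    }

    @Override
    public boolean supports(Class<?> authentication) {
        return UsernamePasswordAuthenticationToken.class.isAssignableFrom(authentication);
    }

    private boolean ldapAuthenticate(String username, String password) {
        throw new UnsupportedOperationException("placeholder: bind against LDAP here");
    }

    private String[] loadRolesFromOracle(String username) {
        throw new UnsupportedOperationException("placeholder: query the group/role tables here");
    }
}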

In the Wildfly environment I also needed to switch to using JNDI to resolve an Oracle data source configured in the container, but this is a breeze in Spring Boot with

spring:
  datasource:
    jndi-name: <JNDI name of your data source here>

in the application.yml file... this didn't take more than half an hour to figure out and make the change.

I had some difficulty at first getting the logging to work the way I wanted it to with the application deployed into a container.  I started out trying to keep the original Log4j 1.2.x configuration, but that doesn't seem to be very well supported in Spring Boot, and eventually I gave up and used Logback with a logback-spring.xml file.  This is one of those areas where there are a hundred different ways to do something and it's not always clear how to proceed.
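
For the record, the shape of what I ended up with (a generic example, not my actual appenders or levels):

<configuration>
  <!-- Minimal console appender; logback-spring.xml also supports Spring profiles. -->
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>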

Similarly, when it comes to configuring your Spring Boot application, there's such a wealth of features to support this that it's not always obvious what's the best way to meet a given requirement.  I wound up with two sets of application-(profile).yml files, one for running the application and one for running the integration tests.

The service has been running in production for 3-4 weeks now.  This is the most ambitious Spring Boot and Spring Data project I have tackled and also marks an important milestone in our team's adoption of this technology.