Halfway there

It’s a little surreal to think that the year is halfway through. There’s so much left to do!

There’s a lot going on, but I can’t share it until after the fact. Until then, I’ll leave you with a picture of my gorgeous nails.


Stand up

On a recent project, we hit a minor setback: a system was delivered with components that didn’t play nicely together. How did this happen? From my point of view, we had discussed a strategy for interoperability and I felt we were all on the same page about how to proceed. My instinctive reaction on discovering the incompatibility was frustration (“I was under the impression we’d agreed to do X, so why was Y done?”) and dismay that additional unplanned development was now part of an already-tight deadline. That was my first reaction, and it’s okay to feel that way. However, stopping at that reaction doesn’t address the deeper issues. The other part of this story is examining the processes and environment that led to the design and delivery of a system whose two key components would not play well together. That’s a decent-sized “Oops” to have happen, and I want to learn how to prevent it from happening again.

At $currentCompany, we do Agile/Scrum with all the requisite ceremonies – daily standups, demos, et cetera. Standups are intended to help the team be aware of what everyone else is doing, help others get unblocked, and surface any issues. I really like this explanation of what stand-ups are about from Jason Yip:

Stand-ups are a mechanism to regularly synchronise so that teams…

  • Share understanding of goals. Even if we thought we understood each other at the start (which we probably didn’t), our understanding drifts, as does the context within which we’re operating. A “team” where each team member is working toward different goals tends to be ineffective.
  • Coordinate efforts. If the work doesn’t need to be coordinated, you don’t need a team. Conversely, if you have a team, I assume the work requires coordination. Poor coordination amongst team members tends to lead to poor outcomes.
  • Share problems and improvements. One of the primary benefits of a team versus working alone, is that team members can help each other when someone encounters a problem or discovers a better way of doing something. A “team” where team members are not comfortable sharing problems and/or do not help each other tends to be ineffective.
  • Identify as a team. It is very difficult to psychologically identify with a group if you don’t regularly engage with the group. You will not develop a strong sense of relatedness even if you believe them to be capable and pursuing the same goals.

The description above captures the ideal state, and I think we do well on the “Identify as a team” and “Share problems and improvements” bullet points. We joke at standups and commiserate over issues. However, “Coordinate efforts” and “Share understanding of goals” seem to be where things went sideways, as it were. In particular, when our understanding drifted, standup would have been the perfect place to bring it up so that others working on various parts of the system had time to assess how their piece was affected. I’ll keep noodling on this as I continue to introspect.


Debugging a basic TypeScript app in Visual Studio Code

I’m learning TypeScript, and one of the most important ‘to-do’s when learning a new language is setting up your debugging environment.

Here’s how to get a bare-bones app (e.g. a helloworld.ts file that compiles down to a helloworld.js file in the workspace folder) set up for debugging. Pre-requisite to following the steps below: you have already set up your computer to run a TypeScript application successfully.
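For reference, the helloworld.ts itself is nothing fancy — any small file like this sketch will do (contents here are illustrative, not the exact file from my repo):

```typescript
// helloworld.ts - a trivial program to set breakpoints in
function greet(name: string): string {
  return `Hello, ${name}!`;
}

console.log(greet("world")); // prints "Hello, world!"
```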

  1. Create a folder for your TypeScript HelloWorld application and initialize the folder using tsc --init (this creates the tsconfig.json file).
  2. By default (version 1.30.1), the generated tsconfig.json file does not enable source maps. A source map is a mapping from the TypeScript file to the generated JavaScript file (TypeScript transpiles to JavaScript). As a result, in order to debug the TypeScript code, the sourceMap attribute needs to be set to true.
  3. To enable debugging via VS Code, you’ll need to create a configuration for that. VS Code uses a special file called launch.json to instruct the IDE on how to debug. Per the official Microsoft documentation on debugging, you simply create one via the IDE by clicking on the Configure gear icon on the Debug view top bar. You’re not done yet though; the autogenerated launch.json file requires the following modifications:
    1. an absolute path to the TypeScript file to be debugged,
    2. absolute paths to the generated JavaScript files after transpilation from TypeScript, and
    3. setting the sourceMaps attribute to true (it’s not clear to me if this is enabled by default but better to be explicit here).
  4. Here’s an example of what my debug configuration looks like after step 3:
    {
        "type": "node",
        "request": "launch",
        "name": "Launch Program",
        "program": "${workspaceFolder}/helloworld.ts",
        "sourceMaps": true,
        "outFiles": ["${workspaceFolder}/helloworld.js"]
    }
  5. The workspaceFolder is a predefined variable you can use to construct full paths to files. In my view, using this would be preferred to hard-coding the absolute paths. Here’s the list of other predefined variables for further exploration.
  6. Now, you can add some arbitrary breakpoints to your TypeScript code, launch the configuration you just created, and observe that your breakpoints in the TypeScript file are being hit!
  7. Some issues I ran into:
    1. The value for the “program” option was incorrect, i.e. pointing to a file that didn’t actually exist.

      Attribute ‘program’ does not exist

      I feel this error could be friendlier, e.g. indicating that the file could not be found or something similar. So if you get this error, verify that you don’t have a typo in the file name.

    2. If you use relative paths for the file location, the error you’ll receive contains a lot of guidance on how to rectify this!

      Attribute ‘program’ is not absolute


And that’s about it! Here’s my bare-bones helloworld TypeScript app on GitHub. Happy debugging!


JetBrains promotion

If you’re a Java developer, odds are very high that you use the IntelliJ IDEA IDE for development. I use IntelliJ both professionally and personally, but until today, I was reliant on the Community Edition for my personal needs.

Today and for a very limited time, JetBrains is offering 50% off all plans! I pulled that trigger so fast that my mom would’ve been impressed. 🙂

https://blog.jetbrains.com/blog/2018/07/30/celebrate-this-friendship-day-with-jetbrains-and-unwrap-your-presents/ (you’re welcome. :D)

Hibernate Query Language

One really cool thing about using the Spring Boot framework is how easy it is to set up entities for database operations. Hibernate is the ‘persistence layer’ that Spring uses to achieve this, and this framework comes with something called ‘HQL’.

Hibernate uses a powerful query language (HQL) that is similar in appearance to SQL. Compared with SQL, however, HQL is fully object-oriented and understands notions like inheritance, polymorphism and association.

With HQL, you can construct queries using your database entities instead of SQL. It certainly feels more declarative than constructing SQL queries. For example, here’s an ER diagram of a personal project (I’m cleaning up some things before I make the repo public).


To fetch all Restaurant records that had critical violations (severity of 3), I constructed the following HQL query.

    select r from Violation v inner join v.inspectionReport ir inner join ir.restaurant r where v.severity = 3

Of course, by doing this, you cede control to Hibernate on exactly how the query is constructed. For more, check this official Spring guide on using JPA (JPA is a spec and Hibernate is a JPA implementation) or this Udemy course on Spring + Hibernate.
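For context, a query string like this typically lives on a Spring Data repository method via the @Query annotation. Here’s a hypothetical sketch (the repository and method names are mine, and it requires Spring Data JPA on the classpath; the entities come from the ER model above):

```java
// Hypothetical repository sketch -- not runnable without Spring Data JPA.
public interface ViolationRepository extends JpaRepository<Violation, Long> {

    // HQL from above: restaurants with critical (severity 3) violations
    @Query("select r from Violation v inner join v.inspectionReport ir "
            + "inner join ir.restaurant r where v.severity = 3")
    List<Restaurant> findRestaurantsWithCriticalViolations();
}
```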

Another cool thing you can do with Hibernate is create a virtual column, also known as a derived property. This virtual column can then be used in any HQL queries you need. Here’s an example of my use case, using Hibernate’s @Formula annotation to create the derived property:

    @Formula(
        "(SELECT COUNT(ir_violations.id)"
            + " FROM ir_restaurants INNER JOIN"
            + " ir_inspectionreport ON ir_restaurants.id = ir_inspectionreport.restaurant_id INNER JOIN"
            + " ir_violations ON ir_inspectionreport.id = ir_violations.inspection_report_id"
            + " WHERE ir_violations.severity = 2 and ir_inspectionreport.restaurant_id = id"
            + " GROUP BY ir_violations.severity)")
    private Integer nonCriticalCount;

and using this derived property:

  @Query(
      value =
          "select new com.janeullah.healthinspectionrecords.domain.dtos.FlattenedRestaurant"
              + "(r.id,ir.score,r.criticalCount,r.nonCriticalCount,r.establishmentInfo.name,ir.dateReported,r.establishmentInfo.address,r.establishmentInfo.county) "
              + "from InspectionReport ir inner join ir.restaurant r ORDER BY r.establishmentInfo.name ASC")
Pretty nifty stuff!

Upgrading to Spring Boot 2.0.1 (my notes)

A personal project of mine was using Spring 1.5.12-RELEASE, and I figured I’d bite the bullet & upgrade. Before you embark on this upgrade, review the release notes, the configuration change log, the migration guide, and other articles on the subject.

These are the changes that affected my project:

  1. Context path updates: server.contextPath is now server.servlet.context-path
  2. Actuator updates: Previously the health endpoint was /app-context/health; it’s now /app-context/actuator/health. Note: the previous paths can be restored via configuration properties.
  3. Server: org.springframework.boot.web.support.SpringBootServletInitializer relocated to org.springframework.boot.web.servlet.support.SpringBootServletInitializer.
  4. Spring repositories:
    1. findOne no longer takes the id (e.g. a long) but the actual entity. I replaced this with findById (a built-in convenience method on the repository)
    2. I previously could pass a List&lt;T&gt; to the save method. You are now required to use saveAll, which is more appropriate
    3. My JpaRepository interfaces previously returned an Iterable from methods like findAll. These methods now return a List&lt;T&gt; of entities (additional info here).
  5. Database:
    1. Connection pooling – Hikari is now the default connection pooling mechanism. Avoid using DataSourceBuilder’s automatic configuration with 2.x (see issue and additional documentation on configuring your datasource).
    2. PostgreSQL – I was previously on version 9.0-801.jdbc4 and I switched to the latest to resolve the following error: ‘PSQLException: Method org.postgresql.jdbc4.Jdbc4Connection.isValid(int) is not yet implemented.’
  6. Testing:
    1. Mockito – org.mockito.runners.MockitoJUnitRunner is now org.mockito.junit.MockitoJUnitRunner
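As an illustration of item 1, the context-path rename is a one-line change in application.properties (the path value here is a placeholder, not my actual context path):

```properties
# Spring Boot 1.5.x:
# server.contextPath=/app-context
# Spring Boot 2.x equivalent:
server.servlet.context-path=/app-context
```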



Docker Adventures – Palantir Docker plugin

After my first post, a lot happened. First, the moment I moved my Dockerfile from my $projectDir root into a folder (docker/myapp/Dockerfile to declutter things), “everything” stopped working. A lot of tutorials and even the Transmode gradle docker plugin assume that your Dockerfile is in a standard location ($projectDir/Dockerfile) and if you step outside that, you’ll learn an awful lot about Docker build contexts and working directories.

To set some context, my application is a Spring Boot Java application (version 1.5.12-RELEASE). This became pertinent for me as a lot of the official Spring guides have defaulted to 2.x.x versions which have significant differences (notably the use of bootRepackage going away in favor of bootJar or bootWar).

So. I took a step back and yanked out the Transmode gradle plugin from my repository & embarked on learning how to do it all manually. A lot of the tasks listed below are very nicely handled by docker-compose but I figured that I needed to know the essentials first in order to troubleshoot when things go sideways. Here are the manual tasks:

  1. manually building my Docker images (the spring application + postgres).
    1. For the postgres image, I did not need to build one from scratch since there is an official image published on Docker Hub. You can read more about configuring the official image according to your needs.
  2. manually tagging and pushing both images to a remote registry (I practiced with both Dockerhub and the Heroku registry),
  3. manually setting up the network (this is a nice-to-have since apps in the same network are implicitly able to communicate with each other. Of course, the server needs to talk to the database so this was the first step)
  4. manually running the postgres application (since the database needs to be up before the server)
  5. manually running the server application
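Sketched as commands, steps 1–5 look roughly like this (image, network, and registry names are placeholders, not my actual values):

```shell
# 1. build the application image (postgres uses the official image instead)
docker build -t myapp-server:beta .
# 2. tag and push to a remote registry (Docker Hub shown here)
docker tag myapp-server:beta myrepo/myapp-server:beta
docker push myrepo/myapp-server:beta
# 3. create a network so the server can reach the database by name
docker network create myapp-network
# 4. start the database first
docker run -d --name myapp-db --network myapp-network -e POSTGRES_PASSWORD=changeme postgres
# 5. then start the server
docker run -d --name myapp-server --network myapp-network -p 8080:8080 myapp-server:beta
```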

Once I verified that I could repeatedly perform steps 1 – 5, I decided to investigate the Palantir Gradle Docker plugin since I’d like to eventually get away from the lengthy Docker commands and move to gradle tasks for the entire process. The Palantir plugin is more robust than the Transmode plugin and in my opinion, it’s a little clearer to understand what is happening.

For the rest of this post, my focus will be on the image generation step and I’ll share my commands from the perspective of one whose Dockerfile is not in the typical location i.e. $projectDir/Dockerfile. Instead, my Dockerfile is located at $projectDir/docker/myapp/Dockerfile. I’ll outline the manual steps and the corresponding gradle task configuration (using the Palantir Gradle Docker plugin).

Building your Docker image for a 1.5.x Spring Boot app

The basics of the Dockerfile (what it is, etc.) are in the Docker docs. The biggest issue I ran into was getting the COPY step to work the same whether the step was manual or via the gradle task. The recommendation I can share is to use a placeholder argument (ARG) for the location of the jar file to be copied. When manually generating your image, you then pass in the value of the ARG. Here's the full command, which was run at the root of my project:

docker build --build-arg JAR_FILE=build/libs/AppServer-0.0.1-SNAPSHOT.jar -t myapp-server:beta -f docker\myapp-server\Dockerfile .

The -t flag sets the friendly name of the image you intend to build, -f sets the location of the Dockerfile, and the last argument is a dot (this is important: it sets the build context for Docker. Update this accordingly). Here's what my Dockerfile looks like.
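If you can't follow the link, here's a minimal sketch of a Dockerfile matching this setup (the base image and file names are assumptions, not my exact file):

```dockerfile
# Minimal sketch for a Spring Boot 1.5.x fat jar
FROM openjdk:8-jdk-alpine
# JAR_FILE is supplied at build time: --build-arg JAR_FILE=build/libs/app.jar
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```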

In the official Spring guide to dockerizing your Spring app, you can also see how the ARG is declared and referenced in the COPY step later. Note: the official Spring guide is written with the Palantir Gradle Docker plugin and Spring 2.x.x in mind, so I had to do things a little differently, mainly:

  1. When setting up the Palantir Gradle Docker plugin, you need to specify the maven url (it's not in mavenCentral), so that bit me. I also opted to use the 'traditional' way of applying the plugin since the newer, more expressive way seems to have some kinks. As an aside, I've created a PR to include this bit of information in the guide.
  2. The gradle configuration in the guide could do with a bit more explanation i.e. the why. Nevertheless, with a lot of trial/error and reading various docs (Spring, Docker, Stackoverflow, etc), I was able to come up with a configuration that works for my Spring Boot application. Here are my settings
    docker {
        dependsOn bootRepackage
        name "$dockerImageName:$dockerImageVersion"
        tags "$dockerImageVersion"
        // located in the build context, which is a folder in build/docker
        dockerfile file("docker/Dockerfile")
        // directly reference the jar file in the Dockerfile
        def artifact = "$projectName-${projectVersion}.jar"
        // copies artifact + Dockerfile to the build context
        files "$libsDir/$artifact", "$projectDir/docker/$dockerImageName/Dockerfile"
        // passing in the jar file location via --build-arg key=value
        buildArgs(['JAR_FILE': artifact])
        pull false
        noCache true
    }

    Compare with the Spring guide configuration:

    docker {
        dependsOn build
        name "${project.group}/${bootJar.baseName}"
        files bootJar.archivePath
        buildArgs(['JAR_FILE': "${bootJar.archiveName}"])
    }

  3. The main differences are:
    1. What the docker task depends on. In my case, I depend on the bootRepackage task since, within my files config setting, I'm fetching the generated jar file from my $libsDir to be copied into the Docker build context along with the Dockerfile from its original location.
    2. Another difference is the use of bootJar task which, for those of us still on 1.5.x, is absent.
    3. In the Palantir readme, you'll observe how the output of a task is fetched like so:
      files tasks.distTar.outputs, 'file1.txt', 'file2.txt'

      You may be tempted to do something like tasks.jar.outputs, but your Spring application won't be packaged correctly even though image generation will succeed (running a jar generated this way results in an error message about the missing main class). My solution was to depend on the bootRepackage task in order to retrieve the full jar file from the expected location.

  4. I've attempted to comment the code in my docker task configuration, but I welcome improvements for clarity/correctness. A lot of the information I've gleaned is largely from trial and error. Finally, to run the gradle tasks, here's a verbose command I use which comes in handy when troubleshooting gradle task failures: gradlew clean docker --console=plain --stacktrace

Running your Docker app

Okay, we've got the image generation out of the way. It is important to validate that you can run the application before calling it a day.  For my purposes, I need to validate that my image was generated correctly and that I could run it with all the needed environment variables (which includes api keys, secrets, and the like). To avoid leaking my secrets, I created a .env file in a location outside of my repository. I was able to run & validate my app two ways: manually and via docker-compose. Note: the Palantir plugin also has a way to programmatically setup your docker-compose file but that's a work in progress.

  1. Running my image manually with multiple arguments.
    1. TL;DR - docker run command
    2. Within my docker run command, I'm setting the container name (--name myapp-server), linking it to my postgres container (--link myapp-db, unnecessary since they both belong to the same network but eh), mapping volumes (-v C:/Users/jane/Repositories/Docker/myapp/logs:/usr/local/tomcat/logs), associating the container with a network (--network myapp-network), setting up my ports, and passing my .env file (--env-file ../Docker/myapp/environmentvariables-server.env). Here's the full command:
      docker run -p 8080:8080 -p 8000:8000 --env-file ../Docker/myapp/environmentvariables-server.env --name myapp-server --link myapp-db --network myapp-network -v C:/Users/jane/Repositories/Docker/myapp/downloads/webpages:/usr/local/tomcat/downloads/webpages -v C:/Users/jane/Repositories/Docker/myapp/logs:/usr/local/tomcat/logs myapp-server:beta

    3. If all is right with your image, you should see the normal Spring startup messages. If you run into issues like the missing main class, verify you didn't fat-finger the commands like someone I know (mainly: when using CMD in the Dockerfile, each argument has to be its own entry in the array) and verify your jar is a valid jar. You should be able to run your jar like any normal java app, e.g. java -jar -Dfoo=bar myapp.jar.
  2. Running my image with docker-compose is super simple! Here's the latest incarnation of my docker-compose file.

In summary, it was fun learning how to generate and run my images the 'hard' way but I'll be sticking to docker-compose from now on. 🙂


Adventures with Docker

At work, we make heavy use of Docker containers for local development, and I figured I should apply those learnings to my personal projects. I will say that this would have been harder if I didn’t have a reference project (i.e. my work project’s setup) to consult.

My project is a Spring Boot Java application and requires a Postgresql database. Here’s how I built it up piece by piece:

  1. I started out learning to ‘dockerize’ the Java web service which is pretty straightforward (my sample Dockerfile which has links to sites that I referenced). With your Dockerfile, you can generate the image using the standard docker commands (docker build -t org/yourapp:latest .). However, I wanted to get more comfortable with gradle so I used the Transmode gradle plugin. This allows me to generate my image using a simple gradle task (my sample gradlew $dockerImageBuildTaskName).
  2. Then, I learned to configure the postgresql container (already present on the DockerHub registry) i.e. changing the port, setting my user/password, etc.
  3. Once I was able to individually build each image, I saw the need for ‘orchestrating’ the setup of the server + db, which led me to docker-compose (my sample docker-compose file). Using this configuration-based approach takes the headache out of starting everything up in the right order. I’m also making use of env files since my application takes in API secrets and the like (aside: do NOT commit items like these to git. I have these env files located outside of my application folder and I use relative paths to access them)
  4. Docker Hub allows you to create a free repository to hold images, and pushing your image is a straightforward two-step process: docker tag local-image:tagname reponame:tagname and docker push reponame:tagname. You can create your docker-compose file in a way that your image gets built every time you do ‘docker-compose up’, but I opted to separate out the build process from the startup.
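The compose file referenced in step 3 looks roughly like this sketch (service names, image names, and env-file paths are placeholders; my real file passes more settings):

```yaml
version: "3"
services:
  myapp-db:
    image: postgres
    env_file: ../Docker/myapp/environmentvariables-db.env
  myapp-server:
    image: org/yourapp:latest
    depends_on:
      - myapp-db
    ports:
      - "8080:8080"
    env_file: ../Docker/myapp/environmentvariables-server.env
```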

Overall, I’m pretty pleased with my progress so far. I’m certain I haven’t followed all the best practices regarding Docker container configurations but that’s on my Trello to-do list!

Useful links:

  1. https://github.com/docker/labs/tree/master/developer-tools
  2. https://spring.io/guides/gs/spring-boot-docker/
  3. https://thepracticaldeveloper.com/2017/12/11/dockerize-spring-boot/#Dockerizing_Spring_Boot_or_any_executable_jar_file
  4. https://github.com/Transmode/gradle-docker
  5. https://docs.docker.com/get-started/

Signature calculation for AWS

I recently learned about Elasticsearch as a means of adding search to my Android app, which uses Firebase. From the Android app, I am issuing an HTTP request (super easy thanks to Retrofit) to AWS Elasticsearch, but this request has to be authenticated.

Amazon has some detailed documentation on how to do this yourself, but I’m waving the white flag after spending 3+ hours debugging a mismatch between the signature I generated and what Amazon generated.

So, I’ll save you the trouble and share actual source code from Amazon that worked for me.

  1. http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-examples-using-sdks.html
  2. Source code showing a signed POST request: https://s3.amazonaws.com/aws-java-sdk/samples/AWSS3SigV4JavaSamples.jar. ‘Unzip’ this jar file if you have 7-Zip; otherwise, use the “jar xvf $file.jar” command.
  3. The ‘auth’ & ‘util’ folders contain the classes of interest, so copy them over to your project.
  4. A ‘runnable’ class you can inspect is PutS3ObjectSample.java which shows how it’s all put together.
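For a taste of what those classes do under the hood, here’s a small sketch of the Signature Version 4 signing-key derivation described in Amazon’s docs: an HMAC-SHA256 chain over the date, region, service, and the literal string "aws4_request". All the input values below are placeholders, not real credentials.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;

// Sketch of the SigV4 signing-key derivation (placeholder inputs, not real credentials).
public class SigV4KeyDerivation {

    // HMAC-SHA256 of a UTF-8 string under the given key
    static byte[] hmacSha256(byte[] key, String data) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Chain: "AWS4" + secret -> date -> region -> service -> "aws4_request"
    static byte[] signingKey(String secretKey, String dateStamp, String region, String service) {
        byte[] kDate = hmacSha256(("AWS4" + secretKey).getBytes(StandardCharsets.UTF_8), dateStamp);
        byte[] kRegion = hmacSha256(kDate, region);
        byte[] kService = hmacSha256(kRegion, service);
        return hmacSha256(kService, "aws4_request");
    }

    public static void main(String[] args) {
        byte[] key = signingKey("EXAMPLEKEY", "20180801", "us-east-1", "es");
        System.out.println("derived signing key: " + key.length + " bytes"); // 32 bytes
    }
}
```

The derived key is then used to HMAC the "string to sign"; a single byte of difference anywhere in that chain produces the signature mismatch I was debugging.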