
Jane's bits and bytes
It’s a little surreal to think that the year is halfway through. There’s so much left to do!
There’s a lot going on but I can’t share until after the fact. Until then I’ll leave you with a picture of my gorgeous nails
On a recent project, we hit a minor setback: a system was delivered with components that didn’t play nicely together. How did this happen? From my point of view, we had talked through a strategy for interoperability, and I felt we were all on the same page about how to proceed. My instinctive reaction on discovering the incompatibility was frustration: “I was under the impression we’d agreed to do X, so why was Y done?” Now additional, unplanned development was part of an already-tight deadline. That was my first reaction, and it’s okay to feel that way. However, stopping at that reaction doesn’t address the deeper issues. The other part of this story is examining the processes or environment that led to the design and delivery of a system where two key components would not work together. That’s a decent-sized “Oops” to have happen, and I want to learn how to prevent it from happening again.
At $currentCompany, we do Agile/Scrum with all the requisite ceremonies – daily standups, demos, et cetera. Standups are intended to help the team be aware of what everyone else is doing, help others get unblocked, and discuss any issues. I really like this explanation of what stand-ups are about from Jason Yip:
Stand-ups are a mechanism to regularly synchronise so that teams…
- Share understanding of goals. Even if we thought we understood each other at the start (which we probably didn’t), our understanding drifts, as does the context within which we’re operating. A “team” where each team member is working toward different goals tends to be ineffective.
- Coordinate efforts. If the work doesn’t need to be coordinated, you don’t need a team. Conversely, if you have a team, I assume the work requires coordination. Poor coordination amongst team members tends to lead to poor outcomes.
- Share problems and improvements. One of the primary benefits of a team versus working alone, is that team members can help each other when someone encounters a problem or discovers a better way of doing something. A “team” where team members are not comfortable sharing problems and/or do not help each other tends to be ineffective.
- Identify as a team. It is very difficult to psychologically identify with a group if you don’t regularly engage with the group. You will not develop a strong sense of relatedness even if you believe them to be capable and pursuing the same goals.
The description above captures the ideal state, and I think we do well on the “Identify as a team” and “Share problems and improvements” bullet points. We joke at standups and commiserate over issues. However, “Coordinate efforts” and “Share understanding of goals” seem to be where things went sideways, as it were. In particular, when “understanding drifted”, standup would have been the perfect place to bring it up, giving others working on various parts of the system time to reassess how their understanding of their piece changes. I’ll keep noodling on this as I continue to introspect.
I’m learning TypeScript, and one of the most important ‘to-do’s when learning a new language is setting up your debugging environment.
Here’s how to get a bare-bones app (e.g. a helloworld.ts file that compiles down to a helloworld.js file in the workspace folder) set up for debugging. Prerequisite to following the steps below: you have already set up your computer to run a TypeScript application successfully.
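For completeness, the helloworld.ts can be as trivial as this sketch (my actual repo file may differ):

```typescript
// helloworld.ts - something small for the debugger to step through
function greet(name: string): string {
  return `Hello, ${name}!`;
}

console.log(greet("world"));
```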
I feel this error could be friendlier, i.e. indicating that the file could not be found or something similar. So if you get this error, verify that you don’t have a typo in the file name.
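In my experience, the culprit is usually the program path in the debug configuration. If you’re using VS Code (an assumption on my part; your editor may differ), a minimal launch.json sketch looks like this, with outFiles depending on where your tsconfig emits the compiled .js:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug helloworld.ts",
      "program": "${workspaceFolder}/helloworld.ts",
      "sourceMaps": true,
      "outFiles": ["${workspaceFolder}/*.js"]
    }
  ]
}
```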
And that’s about it! Here’s my barebones helloworld TypeScript app on GitHub, and happy debugging!
If you’re a Java developer, odds are very high that you use the IntelliJ IDE for development. Professionally and personally, I use IntelliJ, but until today I was reliant on the Community Edition for my personal needs.
Today and for a very limited time, Jetbrains is offering 50% off all plans! I pulled that trigger so fast that my mom would’ve been impressed. 🙂
https://blog.jetbrains.com/blog/2018/07/30/celebrate-this-friendship-day-with-jetbrains-and-unwrap-your-presents/ (you’re welcome. :D)
From searching online, I’ve found two ways to debug tests via gradle:
1. Setting the debug property within the test task to true and executing your gradle command without any additional flags
2. Passing the --debug-jvm flag to gradle

In both cases, make sure you create a debug run config (port 5005) and run said config after starting your tests.
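Concretely, the first option is a one-liner in build.gradle (a sketch, Groovy DSL):

```groovy
test {
    // Suspends the test JVM and listens for a remote debugger on port 5005
    debug = true
}
```

The second option needs no build-file change: run `gradle test --debug-jvm` and attach the same remote-debug run config on port 5005.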
One really cool thing about using the Spring Boot framework is how easy it is to set up entities for database operations. Hibernate is the ‘persistence layer’ that Spring uses to achieve this, and it comes with something called ‘HQL’.
Hibernate uses a powerful query language (HQL) that is similar in appearance to SQL. Compared with SQL, however, HQL is fully object-oriented and understands notions like inheritance, polymorphism and association
With HQL, you can construct queries using your database entities instead of SQL. It certainly feels more declarative than constructing SQL queries. For example, here’s an ER diagram of a personal project (cleaning up some things before I make the repo public).
To fetch all Restaurant records which had critical violations (severity of 3), I constructed the following HQL query.
@Query( "select r from Violation v inner join v.inspectionReport ir inner join ir.restaurant r where v.severity = 3")
Of course, by doing this, you cede control to Hibernate on exactly how the query is constructed. For more, check this official Spring guide on using JPA (JPA is a spec and Hibernate is a JPA implementation) or this Udemy course on Spring + Hibernate.
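For context, the @Query annotation above is declared on a method of a Spring Data repository interface. A sketch of what that might look like (the interface and method names are illustrative, not from my repo):

```java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

// Restaurant is the entity from the ER diagram above.
public interface RestaurantRepository extends JpaRepository<Restaurant, Long> {

    // Same HQL as above: restaurants having a severity-3 (critical) violation
    @Query("select r from Violation v inner join v.inspectionReport ir "
            + "inner join ir.restaurant r where v.severity = 3")
    List<Restaurant> findRestaurantsWithCriticalViolations();
}
```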
Another cool thing you can do with Hibernate is create a virtual column, known as a derived property, which can then be used in any HQL queries you need. Here’s an example of my use case (creating a derived property):
@Formula("(SELECT COUNT(ir_violations.id)" +
    " FROM ir_restaurants INNER JOIN" +
    " ir_inspectionreport ON ir_restaurants.id = ir_inspectionreport.restaurant_id INNER JOIN" +
    " ir_violations ON ir_inspectionreport.id = ir_violations.inspection_report_id" +
    " WHERE ir_violations.severity = 2 AND ir_inspectionreport.restaurant_id = id" +
    " GROUP BY ir_violations.severity)")
private Integer nonCriticalCount;
and using this derived property:
@Query(value =
    "select new com.janeullah.healthinspectionrecords.domain.dtos.FlattenedRestaurant" +
    "(r.id, ir.score, r.criticalCount, r.nonCriticalCount, r.establishmentInfo.name, " +
    "ir.dateReported, r.establishmentInfo.address, r.establishmentInfo.county) " +
    "from InspectionReport ir inner join ir.restaurant r ORDER BY r.establishmentInfo.name ASC")
Pretty nifty stuff!
A personal project of mine was using Spring Boot 1.5.12.RELEASE, and I figured I’d bite the bullet and upgrade. Before you embark on this upgrade, review the release notes, the configuration changelog, the migration guide, and other articles on the subject.
These are changes that affected my project:
management.endpoints.web.base-path=/
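For illustration, in application.properties the old 1.x actuator settings map to new keys in Boot 2 along these lines (which endpoints you expose is up to you):

```properties
# Spring Boot 1.x style (no longer honored in 2.x)
management.security.enabled=false

# Spring Boot 2.x: actuator endpoints live under a configurable base path
management.endpoints.web.base-path=/
# endpoints must now be explicitly exposed
management.endpoints.web.exposure.include=health,info
```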
After my first post, a lot happened. First, the moment I moved my Dockerfile from my $projectDir root into a folder (docker/myapp/Dockerfile to declutter things), “everything” stopped working. A lot of tutorials and even the Transmode gradle docker plugin assume that your Dockerfile is in a standard location ($projectDir/Dockerfile) and if you step outside that, you’ll learn an awful lot about Docker build contexts and working directories.
To set some context, my application is a Spring Boot Java application (version 1.5.12.RELEASE). This became pertinent because a lot of the official Spring guides have defaulted to 2.x.x versions, which have significant differences (notably the use of bootRepackage going away in favor of bootJar or bootWar).
So. I took a step back and yanked out the Transmode gradle plugin from my repository & embarked on learning how to do it all manually. A lot of the tasks listed below are very nicely handled by docker-compose but I figured that I needed to know the essentials first in order to troubleshoot when things go sideways. Here are the manual tasks:
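Roughly, those manual tasks look like the following commands (a sketch; the image, network, and jar names match my other examples and may differ in your setup):

```shell
# 1. Build the executable jar (Boot 1.x repackages it via bootRepackage)
./gradlew clean build

# 2. Build the image, pointing at the non-standard Dockerfile location
docker build --build-arg JAR_FILE=build/libs/AppServer-0.0.1-SNAPSHOT.jar \
  -t myapp-server:beta -f docker/myapp-server/Dockerfile .

# 3. Create a user-defined network so containers can reach each other by name
docker network create myapp-network

# 4. Start the database container on that network
docker run -d --name myapp-db --network myapp-network postgres:9.6

# 5. Start the application container on the same network
docker run -p 8080:8080 --name myapp-server --network myapp-network myapp-server:beta
```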
Once I verified that I could repeatedly perform steps 1 – 5, I decided to investigate the Palantir Gradle Docker plugin since I’d like to eventually get away from the lengthy Docker commands and move to gradle tasks for the entire process. The Palantir plugin is more robust than the Transmode plugin and in my opinion, it’s a little clearer to understand what is happening.
For the rest of this post, my focus will be on the image generation step and I’ll share my commands from the perspective of one whose Dockerfile is not in the typical location i.e. $projectDir/Dockerfile. Instead, my Dockerfile is located at $projectDir/docker/myapp/Dockerfile. I’ll outline the manual steps and the corresponding gradle task configuration (using the Palantir Gradle Docker plugin).
The basics of the Dockerfile (what it is, etc.) are covered in the Docker docs. The biggest issue I ran into was getting the COPY step to work the same whether it was run manually or via the gradle task. The recommendation I can share is to use a placeholder ARG for the location of the jar file to be copied; when manually generating your image, you then pass in the value of the ARG. Here’s the full command, which was run at the root of my project:
docker build --build-arg JAR_FILE=build/libs/AppServer-0.0.1-SNAPSHOT.jar -t myapp-server:beta -f docker\myapp-server\Dockerfile .
The -t flag sets the friendly name of the image you intend to build, -f sets the location of the Dockerfile, and the last argument is a dot (this is important: it sets the build context for Docker, so update it accordingly). Here's what my Dockerfile looks like.
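A minimal Dockerfile along these lines works with that command (this is a sketch with an assumed base image and paths, not necessarily my exact file):

```dockerfile
FROM openjdk:8-jdk-alpine
# JAR_FILE is supplied at build time via --build-arg JAR_FILE=...
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```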
In the official Spring guide to dockerizing your Spring app, you can also see how the ARG is declared and referenced in the COPY task later. Note: the official Spring guide is written with the Palantir Gradle Docker plugin and Spring 2.x.x in mind, so I had to do things a little differently, mainly:
docker {
    dependsOn bootRepackage
    name "$dockerImageName:$dockerImageVersion"
    tags "$dockerImageVersion"
    // located in the build context, which is a folder in build/docker
    dockerfile file("docker/Dockerfile")
    // directly reference the jar file in the Dockerfile
    def artifact = "$projectName-${projectVersion}.jar"
    // copies artifact + Dockerfile to the build context
    files "$libsDir/$artifact", "$projectDir/docker/$dockerImageName/Dockerfile"
    // passing in the jar file location via --build-arg key=value
    buildArgs(['JAR_FILE': artifact])
    pull false
    noCache true
}
Compare with the Spring guide configuration:
docker {
dependsOn build
name "${project.group}/${bootJar.baseName}"
files bootJar.archivePath
buildArgs(['JAR_FILE': "${bootJar.archiveName}"])
}
The plugin's documentation shows specifying files for the build context like `files tasks.distTar.outputs, 'file1.txt', 'file2.txt'`. You may be tempted to do something like tasks.jar.outputs, but your Spring application won't be packaged correctly even though image generation will succeed (running a jar generated this way will result in an error message about the missing main class). My solution was to depend on the bootRepackage task in order to retrieve the full jar file from the expected location.
Okay, we've got the image generation out of the way. It is important to validate that you can run the application before calling it a day. For my purposes, I needed to validate that my image was generated correctly and that I could run it with all the needed environment variables (which include API keys, secrets, and the like). To avoid leaking my secrets, I created a .env file in a location outside of my repository. I was able to run and validate my app two ways: manually and via docker-compose. Note: the Palantir plugin also has a way to programmatically set up your docker-compose file, but that's a work in progress.
docker run -p 8080:8080 -p 8000:8000 \
  --env-file ../Docker/myapp/environmentvariables-server.env \
  --name myapp-server --link myapp-db --network myapp-network \
  -v C:/Users/jane/Repositories/Docker/myapp/downloads/webpages:/usr/local/tomcat/downloads/webpages \
  -v C:/Users/jane/Repositories/Docker/myapp/logs:/usr/local/tomcat/logs \
  myapp-server:beta
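The docker-compose equivalent of that run command can be sketched like this (service and network names mirror the manual commands; the Postgres image tag is an assumption):

```yaml
version: '3'
services:
  myapp-db:
    image: postgres:9.6
    networks:
      - myapp-network
  myapp-server:
    image: myapp-server:beta
    ports:
      - "8080:8080"
      - "8000:8000"
    env_file:
      - ../Docker/myapp/environmentvariables-server.env
    volumes:
      - C:/Users/jane/Repositories/Docker/myapp/downloads/webpages:/usr/local/tomcat/downloads/webpages
      - C:/Users/jane/Repositories/Docker/myapp/logs:/usr/local/tomcat/logs
    depends_on:
      - myapp-db
    networks:
      - myapp-network
networks:
  myapp-network:
```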
In summary, it was fun learning how to generate and run my images the 'hard' way but I'll be sticking to docker-compose from now on. 🙂
At work, we make heavy use of Docker containers for local development, and I figured I should apply those learnings to my personal projects. I will say that this would have been slightly harder if I didn’t have a reference setup (i.e. my work project) to consult.
My project is a Spring Boot Java application and requires a Postgresql database. Here’s how I built it up piece by piece:
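The database piece, for example, can be sketched with commands like these (names and credentials are placeholders, not my actual values):

```shell
# A network shared by the app and the database
docker network create myapp-network

# Run Postgres with placeholder credentials; the app connects via the container name
docker run -d --name myapp-db --network myapp-network \
  -e POSTGRES_USER=myapp -e POSTGRES_PASSWORD=changeme -e POSTGRES_DB=myapp \
  -p 5432:5432 postgres:9.6
```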
Overall, I’m pretty pleased with my progress so far. I’m certain I haven’t followed all the best practices regarding Docker container configurations but that’s on my Trello to-do list!
Useful links:
I recently learned about ElasticSearch as a means of adding search to my android app, which uses Firebase. From the android app, I am issuing an HTTP request (super easy thanks to Retrofit) to AWS ElasticSearch, but this request has to be authenticated.
Amazon has some detailed documentation on how to do this yourself, but I’m waving the white flag after spending 3+ hours debugging a mismatch between the signature I generated and what Amazon generated.
So, I’ll save you the trouble and share actual source code from Amazon that worked for me.