I'm a full-stack software developer in Madison, WI specializing in Java web services and web applications. In my spare time I provide consulting services to various companies and find interesting ways to harness technology.
If you're looking for software services, let me know! If I'm not the appropriate person to take your project, I probably know a person or company that would be a good fit.
HydroPAD is a collaborative project with Dave Johnson. We wanted to create a unique analog user input mechanism that could inform digital interfaces. The result is a prototype contraption that uses flow meters and check valves with a Raspberry Pi to serve up water flow direction and flow rate to any interface that can consume websockets.
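For a sense of how an interface might consume those events, here's a minimal sketch using the standard Java WebSocket API (JSR 356). The endpoint URI and JSON message shape are my assumptions here, not HydroPAD's actual protocol:

import java.net.URI;
import javax.websocket.*;

// Hypothetical HydroPAD consumer: connects to the Pi and reacts to flow events
@ClientEndpoint
public class HydroPadListener {
    @OnMessage
    public void onFlowEvent(String json) {
        // Assumed message shape, e.g. {"direction":"LEFT","rate":2.4}
        System.out.println("Flow event: " + json);
    }

    public static void main(String[] args) throws Exception {
        ContainerProvider.getWebSocketContainer()
            .connectToServer(HydroPadListener.class, URI.create("ws://raspberrypi.local:8080/flow"));
        Thread.sleep(60_000); // stay alive long enough to receive events
    }
}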
The first proof of concept implementation is a game where the player presses one of the two hot water bottles to move the on-screen player to the right, and the other bottle to move to the left. We came up with a number of unique games that could take advantage of the water-based input mechanism. The system is structured so that many HydroPAD devices (the PVC assembly) could be electronically linked together to form multiplayer cooperative or competitive games.
Fireball Finder! Alexa Skill
Fireball Finder! is an Interactive Voice Assistant (IVA) for Amazon Alexa. Fireball Finder uses the Jet Propulsion Laboratory's Solar System Dynamics API to find fireballs that were close to Earth on a given date. Alexa users can say things like "Ask Fireball Finder to scan" to search backward, day by day, for fireballs up to three weeks in the past. When a fireball is found, Fireball Finder! tells users how far away it was, how fast it was moving, and how much energy it released when it hit the atmosphere.
For Alexa users who are just interested in massive fireballs, Fireball Finder! offers custom searches for fireballs above or below a given impact energy.
Fireball Finder! is an AWS Lambda JavaScript function.
Inbound Asteroid 2 Alexa Skill
Inbound Asteroid 2 is an Interactive Voice Assistant (IVA) for Amazon Alexa. Inbound Asteroid 2 uses the NASA Near Earth Object API to locate asteroids, meteors, comets, and other natural objects close to the Earth. Alexa users can say a plethora of flexible phrases to activate the skill and ask about near-Earth objects for the current day, a day in the past or future, or less-specific days like "tomorrow" and "last Tuesday".
Inbound Asteroid 2 is the reincarnation of the original Inbound Asteroid skill. This second version is written as an AWS Lambda JavaScript function instead of running in a standalone web container like the original skill did. Inbound Asteroid 2 enjoys 5-star reviews.
Non-GMO Seed Marketplace
A referred client had been working on an idea for the past few years. "It's like FarmersOnly.com, but for seeds," he told me. The client wanted to define a business model that could bring non-GMO farmers and buyers together so everyone could reach their highest profit margins.
After many iterations of business model and software development, the client's business is now an online marketplace that connects farmers and contract entities. The web application shepherds both parties through an easy-to-use negotiation process and facilitates contract delivery/agreement. The application uses its integration with an email vendor to keep users up-to-date on all aspects of their buying and selling activities, including daily listing digests and reminders.
Mindwave for Neuroscience
My work with the NeuroSky Mindwave Bluetooth headsets hasn't gone unnoticed. The owner of a wearables data aggregation platform (ActiveOS) contacted me after seeing my mindwave-bluetooth library. Their client, a professor of neuroscience, wanted 8 students per class to be connected to NeuroSky Mindwave Mobile headsets. Over the course of the semester, the students' brainwave data would be used to teach lessons about neuroscience concepts. The project was completed quickly and the professor is now teaching students with their own brainwaves.
Underwriter Guidelines Service
The Underwriter Guidelines web service allows applications to perform geography-based searches for insurance underwriting rules. This service makes it easy to inform underwriters of any underwriting rules that apply to an applicant's address based on state, county, zip code, FIPS code, or distance from the coast. Underwriting applications use this flexible service to abstract the business logic of geography-based underwriting policies away from the business of writing an insurance policy.
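For a feel of the interaction, a consuming application might issue something like the request below and get back the underwriting rules matching that geography. The endpoint and parameter names are hypothetical, not the service's actual contract:

GET /underwriting-guidelines?state=WI&county=Dane&zip=53703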
Underwriter Tearsheet
The Underwriter Tearsheet web application solves the problem of locating and correlating information about an insurance applicant's physical address. When an underwriter enters an address, the Underwriter Tearsheet auto-completes it, double-validates it as a real address, and displays a printable screen of aggregated information from multiple sources. Underwriter Tearsheet integrates with a number of third-party data sources.
In addition, Underwriter Tearsheet logs all user activity and can provide an audit trail for all searches submitted. Managers can view the results of any given search just by going to a result-specific URL.
Distance to Coast Service
The KB Geo Distance to Coast web service brings fast, accurate results for insurance risk management. Web service consumers can provide a latitude/longitude pair or a street address to get the number of miles to the nearest coastline. The service incorporates NOAA geographic data with proprietary algorithmic preprocessing, making it possible to get results almost instantly.
The RESTful service introduces security to the 3rd-party risk web services world, offering SSL and authentication via predetermined auth tokens, IP binding, and domain binding.
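As an illustrative sketch of the request/response shape (the endpoint, parameter names, auth header, and payload below are hypothetical):

GET https://kbgeo.example.com/distance-to-coast?latitude=43.0731&longitude=-89.4012
Authorization: Token <predetermined-auth-token>

{ "miles": 712.4 }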
Inbound Asteroid Alexa Skill/Google Assistant
Inbound Asteroid is a platform-agnostic Interactive Voice Assistant (IVA) for both Amazon Alexa and Google Home/Google Assistant. The platform is written in such a way that it's easy to add new functionality or additional platforms, such as Microsoft Cortana and Apple Siri.
Inbound Asteroid uses the NASA Near Earth Object API to locate asteroids, meteors, comets, and other natural objects close to the Earth. Inbound Asteroid users can say a plethora of flexible phrases to activate the skill and ask about near-Earth objects for the current day, a day in the past or future, or less-specific days like "tomorrow" and "last Tuesday".
The Inbound Asteroid Alexa skill and Google Home/Google Assistant action have since been taken down due to server costs, but generated 5-star reviews while active. During the skill's lifetime it encouraged a number of other developers to write similar skills using the same NASA API.
Attention Deficit Alarm
Tony Sebion's cyclic obsession with Kickstarter at one point yielded a Mindwave Bluetooth EEG headset. He joked about his tendency to stop paying attention in a conversation, and when I said "I bet we could find some technology to track that", he got the funniest look on his face.
Thus was born the Attention Deficit Alarm. I consumed the Mindwave Bluetooth API to alert Tony with a *ding* sound file when his attention dropped below a preset threshold. When we demonstrated it at an intra-company presentation, he started *ding*ing like crazy...checking his email instead of paying attention :D
Shareholder Attention Display
Shareholder Attention Display was purpose-built for the 2015 Omni Resources Shareholders Meeting. It built on the Mindwave Bluetooth EEG headset output, using a giant TV to display live attention/meditation graphs from four headsets worn by volunteers in the crowd. The speakers began challenging themselves to keep everyone's attention levels high; any dip in the graphs was greeted with laughter and jokes from the speakers. Of particular interest were the graphs during the announcement of the annual stock price valuation - almost quadruple the previous year's!
Career Affinity Meter
A sales guy at our company called me up one day. "[CEO] said I should talk to you," he said. "We're sponsoring a middle school career fair and need something engaging for that age group."
My predilection for the Mindwave Bluetooth EEG headset now established, I of course suggested an interactive presentation wherein we would hook kids up to the Mindwave headset, show pictures depicting different professions, and print out a graph showing which areas the kids' minds showed the most interest in.
It was a huge hit at the career fair. About 1500 kids showed up and nearly 300 got to try out the Career Affinity Meter (CAM) while their friends looked on. The Omni Resources booth was packed throughout the career fair, actually causing a bottleneck. Several kids were reprimanded for lingering at our booth past their allotted time. Some of the teachers and the business representatives from other booths got to try it too! Later we got emails and handwritten letters from kids who really enjoyed using CAM, and many felt its measurements were a spot-on reflection of their interests. We even got a mention and photo in the Fox Valley School District newsletter!
CAM was my first go at a JavaFX application, built on plenty of Java Swing experience. CAM displayed the Mindwave headset wearer's attention and meditation levels in real-time while it flashed each profession image for three seconds and tracked the user's attention level during that time. When all career images were exhausted, it automatically generated a bar graph and sent it to the default local printer. We had a lot of fun testing CAM in the office, and found that the images themselves could be normalized to make the results more accurate.
Fans of Fury
Fans of Fury was a blast! As a long-time sponsor of That Conference, Omni Resources wanted to have an amazing booth for That Conference 2015. After much collaboration with Tony Sebion, Patrick Tsai, and Brian McCrary, Fans of Fury began to take form. The game: Use mind control to push balls into your opponent's goal.
Tony Sebion built an 8ft by 3ft by 3ft wooden box while Dave Johnson and I worked on the electronics. We wanted to have automatic scoring, a scoreboard, integration with conference attendee badges (for player IDs), four Mindwave Bluetooth EEG headsets - two in game and two "on deck", and a couple of safety features.
The final system architecture revolved around a Tomcat server running a Spring Boot-powered web app that acted as a command center. When a Raspberry Pi booted, it opened a websocket connection to the server, registered itself and its connected devices (scoring sensors and 1000kv quadcopter motors acting as fans), and waited for instructions via the socket. The server leveraged the bi-directional nature of websockets to both push independent fan speed commands to the Raspberry Pi for each fan and receive score events, which it then pushed to the scoreboard. The scoreboard web page opened its own websocket on load and listened for changes in player scores, player information, and movement of EEG headsets between "on deck" and in-game. Another developer built an iPhone app to allow player registration and control which headset powered which fan.
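As a rough sketch of that command-center pattern (the class name and message format here are mine, not the actual source - the real code is on GitHub), a Spring WebSocket handler could register Pis and relay commands like this:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.web.socket.TextMessage;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.handler.TextWebSocketHandler;

// Hypothetical command center: Pis register over a websocket, the server
// pushes fan-speed commands down and relays score events up to the scoreboard
public class CommandCenterHandler extends TextWebSocketHandler {
    private final Map<String, WebSocketSession> pis = new ConcurrentHashMap<>();

    @Override
    protected void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
        String payload = message.getPayload();
        if (payload.startsWith("REGISTER:")) {
            // e.g. "REGISTER:pi-1" - remember this Pi so we can push fan commands to it
            pis.put(payload.substring(9), session);
        } else if (payload.startsWith("SCORE:")) {
            // A scoring sensor fired; forward to the scoreboard page's socket (omitted here)
        }
    }

    /** Push an independent speed command (0-100) to one fan on one Pi. */
    public void setFanSpeed(String piId, int fan, int speed) throws Exception {
        WebSocketSession session = pis.get(piId);
        if (session != null && session.isOpen()) {
            session.sendMessage(new TextMessage("FAN:" + fan + ":" + speed));
        }
    }
}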
The project was able to leverage the mindwave-bluetooth library I wrote to parse the Mindwave headset Bluetooth streams and raise events. The source code for the Fans of Fury project is now open-source, so you can download it all from GitHub.
See more pictures of Fans of Fury at That Conference on Facebook!
Here's video from the Omni Resources booth at That Conference:
Icebreaker Mobile App
Icebreaker is a prototype project intended to explore the possibility of using Bluetooth 4.0 beacons to introduce strangers to each other. The idea revolves around broadcasting information to nearby devices that could then display user pictures and biographies to their owners.
After a successful proof-of-concept phase with both iOS and Android apps, the sponsoring organization unfortunately focused on other things and left Icebreaker by the wayside.
Many websites use SMS as part of a 2FA (2-factor authentication) scheme. As a developer, implementing an SMS notification feature for the first time can be challenging even when using enterprise platforms like Amazon Web Services. I was unable to find a definitive path to this goal in existing documentation, so this article skips boilerplate and takes a succinct approach to leveraging SMS notifications through AWS SNS.
Sign up for/enable SNS in your AWS Management Console
Attach the AmazonSNSFullAccess policy to your user.
In your Java project, add the AWS SNS SDK dependency (Maven pom shown here):
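The snippet looks something like this (the version below is just an example - check for the latest release):

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-sns</artifactId>
    <version>1.11.200</version>
</dependency>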
Create an authenticated client to consume AWS SNS. I made mine a singleton (AwsClientFactory): https://gist.github.com/steveperkins/71156551b0c040a14e3223d4fd916b48. Note that a BasicAWSCredentials object can be used with every AWS SDK I've encountered in the Lightside project.
Now you can use your new SmsNotificationService to send a message to a phone number:
new SmsNotificationService().send("+14147468200", "Lightside is here");
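For context, a minimal version of that service could look like the sketch below. The AwsClientFactory accessor shown is hypothetical shorthand for the singleton in the gist above:

import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.model.PublishRequest;

public class SmsNotificationService {
    public void send(String phoneNumber, String message) {
        // Assumed accessor - see the AwsClientFactory gist for the real singleton
        AmazonSNS sns = AwsClientFactory.getSnsClient();
        // Publishing straight to a phone number sends an SMS; no topic or subscription needed
        sns.publish(new PublishRequest().withPhoneNumber(phoneNumber).withMessage(message));
    }
}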
Shadow Things in the Amazon IoT platform abstract away the boilerplate behind communicating securely and reliably with the AWS IoT platform. Creating Shadow Things in the management console is easy - and that can lead to a haphazard herd of AWS IoT Things. Writing code to programmatically corral your devices the first time can be challenging even on the AWS IoT platform. I was unable to find a definitive path to this goal in existing documentation, so this article skips boilerplate and takes a succinct approach to discovering existing AWS IoT Things and finding out more about them.
Shadow Things in the Amazon IoT platform abstract away the boilerplate behind communicating securely and reliably with the AWS IoT platform. Once Things have been created, you'll want to integrate that communication with your existing infrastructure. AWS IoT provides plenty of options for listening to Thing state and requesting changes, but getting spun up the first time can be challenging. I was unable to find a definitive path to discovering and changing Thing state via the AWS IoT REST API in existing documentation, so this article skips detailed boilerplate and takes a succinct approach to retrieving a known AWS IoT Shadow Thing and updating its state.
In your Java project, add the AWS IoT SDK dependency (Maven pom shown here):
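Again, something like this (version is illustrative):

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-iot</artifactId>
    <version>1.11.200</version>
</dependency>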
Create a client of type AWSIotData using AWSIotDataClientBuilder, build a GetThingShadowRequest, and extract the response's payload (gist):
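The core of it boils down to roughly this (the thing name is a placeholder):

import java.nio.charset.StandardCharsets;
import com.amazonaws.services.iotdata.AWSIotData;
import com.amazonaws.services.iotdata.AWSIotDataClientBuilder;
import com.amazonaws.services.iotdata.model.GetThingShadowRequest;
import com.amazonaws.services.iotdata.model.GetThingShadowResult;

public class GetShadowExample {
    public static void main(String[] args) {
        AWSIotData client = AWSIotDataClientBuilder.standard().build();
        GetThingShadowResult result = client.getThingShadow(
                new GetThingShadowRequest().withThingName("my-thing"));
        // The payload is a ByteBuffer holding the shadow document JSON
        System.out.println(StandardCharsets.UTF_8.decode(result.getPayload()));
    }
}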
This nets you a descriptive JSON string along these lines (the properties under each section are whatever your Thing reports - "watering" below is just an example):
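{
  "state": {
    "desired":  { "watering": true },
    "reported": { "watering": false },
    "delta":    { "watering": true }
  },
  "version": 7,
  "timestamp": 1499200000
}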
To change this Thing's state, just change the "desired" section to match the properties and values you need the Thing to reflect, and send off an UpdateThingShadowRequest (gist):
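Roughly (again with a placeholder thing name; note that the REST payload wraps your changes in a top-level "state" object):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import com.amazonaws.services.iotdata.AWSIotData;
import com.amazonaws.services.iotdata.AWSIotDataClientBuilder;
import com.amazonaws.services.iotdata.model.UpdateThingShadowRequest;

public class UpdateShadowExample {
    public static void main(String[] args) {
        AWSIotData client = AWSIotDataClientBuilder.standard().build();
        // Only the properties you want changed need to appear under "desired"
        String payload = "{ \"state\": { \"desired\": { \"watering\": true } } }";
        client.updateThingShadow(new UpdateThingShadowRequest()
                .withThingName("my-thing")
                .withPayload(ByteBuffer.wrap(payload.getBytes(StandardCharsets.UTF_8))));
    }
}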
You can send just the "desired" section inside the top-level "state" object - no need to repeat the "reported" and "delta" sections. E.g. { "state": { "desired": { "watering": true } } }
Lightside - AWS IoT
Lightside was a series of Internet of Things projects designed to leverage the power of the Amazon IoT platform.
Darth was a 3ft Darth Vader action figure that we rigged up with a motion sensor. When someone approached Darth, he would say one of 48 pre-recorded phrases. We enhanced this concept by connecting him to the AWS IoT platform so his cheeky phrases could be triggered by remote events.
With the success of our own convenience layer on top of the Amazon IoT C SDK, we rigged up an electric sit/stand desk to raise and lower based on AWS IoT messages. Further plans were made to create specific user profiles and raise and lower the desks based on the specific user and time of day.
Since we'd already gotten Darth Vader and an electric desk integrated with the AWS IoT platform, we placed motion sensors in each office in our building. Every motion sensor reported back up to the AWS IoT platform. I built a web application that read motion sensor state via the AWS IoT REST API to show which rooms were occupied. This data was shown graphically as a floor plan of the building with pulsating red dots where motion was sensed. We knew it was a success when we could visually track a person's movement through the building (with their consent, of course).
Secure AWS EC2 with a Free LetsEncrypt SSL Certificate
Using LetsEncrypt's free CertBot with Amazon Web Services' EC2 server instances (AWS Linux AMI) can be frustrating. CertBot (letsencrypt.org) is a great tool for spinning up SSL certificates but is not currently supported on AWS Linux. This post reveals the gotchas and shows how you can install a CertBot certificate on AWS Linux. It's assumed that you already have an EC2 instance up and running and a domain name pointed at the instance. The domain name is required because Let's Encrypt won't issue certificates for the default amazonaws.com hostnames.
Open ports 80 and 443
In the EC2 console open Security Groups, select the security group attached to your instance, choose Actions > Edit inbound rules, and verify HTTP/80 and HTTPS/443 are both open. If you have any outbound restrictions, also open ports 80 and 443 outbound so CertBot can reach https://letsencrypt.org.
Download the certbot-auto script and make it executable
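From your EC2 instance's home directory:

wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto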
If you don't add the --debug parameter, CertBot will only display a message. It will not generate certificates.
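A typical invocation on AWS Linux looks like this (substitute your own domain; --standalone is just one of CertBot's plugin options):

./certbot-auto certonly --standalone --debug -d your-domain.com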
Find the generated certificates
The CertBot script puts generated certificates in /home/ec2-user/letsencrypt/live/<your-domain>.
If you need to import the cert into a Java Key Store
sudo su
openssl x509 -outform der -in /home/ec2-user/letsencrypt/live/<your-domain>/cert.pem -out cert.der
keytool -import -alias <your alias> -keystore cacerts -file cert.der
The CertBot script makes /home/ec2-user/letsencrypt writable only by root, so you need to become root before converting the generated cert to DER format. Once in DER format, you can use the JDK's keytool utility to import the cert into your keystore.
If you're doing more than a simple test, you'll probably want to add a scheduled auto-renewal because Let's Encrypt certs are only valid for 3 months.
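A minimal cron entry for that might look like this (the schedule and log path are arbitrary):

0 3 * * 1 /home/ec2-user/certbot-auto renew --debug >> /home/ec2-user/certbot-renew.log 2>&1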
Amazon Alexa provides a voice-to-action framework for development of custom skills. Custom Alexa skills can do anything – order a taxi, consume an API, or just provide a conversation. The functionality itself is only a small part of building an Alexa skill, though. This post lays out some of the other factors to be prepared for when you think about building a custom Alexa skill - specifically, whether your skill fits well within the Alexa interaction model.
Will Your Skill Fit?
Before writing any code, it's important to understand how your new Alexa skill will perform within the confines of the Alexa world. Some kinds of functionality and interaction just don't fit, and as a result feel kludgy or provide a frustrating user experience.
Does your skill need multiple user inputs to work?
Although Alexa interactions can be conversational, increasing the number of things the user is required to provide up-front can dampen the overall user experience. For example, a skill for purchasing concert tickets might be rarely used because it asks for the band name, date, location, price option, and seating all in one go. Imagine a user saying “Alexa, ask Concert Guru to find tickets for Disturbed in Madison WI on January 25th under forty-five dollars in the front row”. Since Alexa’s voice-to-text can be prone to misinterpretation, Concert Guru is providing five opportunities for their skill to fail completely in the eyes of the user.
If your skill can function with just some of the parameters, you can ask the user for each one independently, which helps maintain a good user experience. There's still a point where providing those inputs is tedious, though, so be aware of when you're asking too much from the user before providing value back.
Can your skill respond in less than 5 seconds?
After invoking your skill endpoint, Alexa will wait up to 5 seconds for a response. If no response is received, your users hear an error message like “The Inbound Asteroid skill is not responding”. To prevent this, make sure any APIs your Alexa skill consumes are performant and will return their responses to your skill in time for your skill to respond to Alexa. If that’s not possible, your skill is probably not ready for implementation. The NASA Near Earth Object API that Inbound Asteroid consumes is fast enough to meet this criterion while still providing time for extra processing – otherwise we couldn’t have successfully implemented it.
Does your skill rely on privileged information?
It is possible to link a user’s Amazon account to an account in your organization’s system, but it requires a fair amount of extra work and additional user interactions. You can create a card that displays a link on the Amazon Alexa phone app which the user must click to create the link. Although this is a security necessity, it also means future interactions with that Alexa device will likely assume only one person uses your skill. Can you counteract this tediousness by providing some functionality without user account information? For example, your skill might allow a user to say their order number to hear its status, but require user account information before being able to change the shipping address. This helps the user get value from your skill even if they haven’t yet linked their account with your organization.
Amazon Alexa provides a voice-to-action framework for development of custom skills. Custom Alexa skills can do anything – order a taxi, consume an API, or just provide a conversation. The functionality itself is only a small part of building an Alexa skill, though. This post lays out some of the other factors to be prepared for when you think about building a custom Alexa skill - specifically, where to host your skill.
Where can your skill live?
Once you've determined that your skill is feasible, you have another decision to make. Custom Alexa skills can be hosted in one of two places: an AWS Lambda function or an external server. Your decision should be based on your specific situation, but here are some clear factors:
Lambda Pros:
Much easier and faster to get started
Deploy-debug-redeploy development cycle is much faster
JavaScript console messages are automatically logged to CloudWatch
Lambda Cons:
Web interface is clunky
Problems are easily obscured
Cost-per-execution model can become expensive at high volume
External Server Pros:
Existing infrastructure can be used to host many skills without proportional cost increase
Full control of code, security, etc and access to internal resources (on-premise databases, internal APIs, etc)
Logging can be verbose and integrated with an existing logging mechanism (Splunk, etc) without new costs
External Server Cons:
Deploy-debug-redeploy development cycle is slow (without extra tools)
Given how the Interactive Voice Assistant (IVA) market is shared by both Amazon Alexa/Echo/Tap/Dot and Google Assistant/Google Home, it makes sense to spend your skill investment on a single codebase that services both IVA platforms, as well as future IVA platforms, without proportional additional cost and effort. Deploying your skill to an external server makes this possible. When building the Inbound Asteroid skill my goal was to keep both long-term costs and development effort low, so I chose to use an external EC2 server.