I was thinking about how best to chronicle my career so far, showcasing the variety of contributions I have delivered. After some pondering, I came up with the idea of a graph showing how my focus has shifted between the areas of software, data, cloud, and security.
I had the look I wanted in mind but wasn’t sure how to implement it. My first concept was a stacked bar graph with variable-width bars depending on my time in each position. Stacked bar graphs were easy, but I did not find many resources on variable-width bars. The Python Graph Gallery led me to realize a stacked area chart would be a natural fit: by increasing the number of points, the shape asymptotically approaches a variable-width stacked bar graph. Using the number of months as the width gave me a visually pleasing slope angle at the transition points and was logical as well! Removing the percentage ticks on the Y axis and customizing label positioning on the X axis completed the graph.
See the Python notebook and JSON definition I used to make this graph.
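If you want to reproduce the effect, here is a minimal sketch of the approach with matplotlib; the roles, month counts, and focus splits below are made up for illustration, and the real data lives in the notebook above.

```python
# Minimal sketch of the variable-width stacked bar effect using a stacked
# area chart. The positions and focus splits are placeholders; see the
# linked notebook for the real data.
import matplotlib.pyplot as plt
import numpy as np

# One entry per position: months in role, then (software, data, cloud,
# security) focus as fractions that sum to 1.
positions = [
    (12, (0.7, 0.1, 0.0, 0.2)),
    (24, (0.2, 0.5, 0.2, 0.1)),
    (18, (0.1, 0.2, 0.5, 0.2)),
]

# Repeat each position's split once per month so the area chart approximates
# a variable-width stacked bar: more points -> sharper "bars", with a short
# slope at each transition.
x, stacks = [], [[], [], [], []]
month = 0
for months, split in positions:
    for _ in range(months):
        x.append(month)
        for series, value in zip(stacks, split):
            series.append(value)
        month += 1

fig, ax = plt.subplots()
ax.stackplot(x, *stacks, labels=["software", "data", "cloud", "security"])

# Drop the percentage ticks on the Y axis and center one label per position
# on the X axis.
ax.set_yticks([])
boundaries = np.cumsum([0] + [months for months, _ in positions])
ax.set_xticks((boundaries[:-1] + boundaries[1:]) / 2)
ax.set_xticklabels([f"Role {i + 1}" for i in range(len(positions))])
ax.legend(loc="upper right")
plt.show()
```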
My Tech Journey
This is some context you won’t find on my LinkedIn profile.
My first introduction to programming came when I took AP Computer Science in 11th grade, and I was hooked from then on. When looking at colleges and scholarships, I came across a program with the NSA that would have provided a scholarship, summer internships, and a job after college. I made it to the last round at Fort Meade, including a full-scope polygraph for TS/SCI clearance, but didn’t get the offer. I still liked the cyber security angle, and during college I competed on our cyber defense team, which placed 2nd in the 2014 Mid-Atlantic Collegiate Cyber Defense Competition. I also did an 8-month full-time co-op with Babcock & Wilcox mPower, building UI prototypes for control room operators on their small modular reactor. There were a variety of reasons the project did not make it over the finish line, including economic timing and regulatory hurdles, so the next summer I moved on to Genworth with an endpoint security internship.
After college, I started the IT Leadership Development Program at Genworth. The program included four 6-month rotations on different teams to broaden my horizons and help pick the area that was the best fit for roll-off. I started out in Richmond working on Box and OneDrive data loss prevention. Then I worked on the ETL team, where I took fraud records from Hive and sent them to an ESB to ingest the data back into the transactional system and surface it to underwriters. During this time, Atul Saurav was a great mentor, teaching me a lot about data. Still thinking I wanted to stay a security guy, I moved down to Raleigh for an application security rotation on the Mortgage Insurance business unit’s security team. I’ve been here in Raleigh ever since! The cloud was a hot new topic at that time. After hearing there was a team working on AWS that had recently launched a new EMR-based analytics platform, I worked towards getting on that team. My thought was that this would be an opportunity to keep a security lens but move earlier in the design process. I created a Git repo with Terraform labs that I worked on in the evenings to show I was serious about learning AWS. This was around the time AWS ALBs were coming out, and the company was migrating off a Consul-based stack to native ALBs. So in my personal AWS account, I built a two-tier ALB setup for public and private services. When I presented my initiative to Chris, at that time Director of Architecture and now CTO, he gave me the opportunity to rotate on his team. At one point in the six-month rotation, as I was working through the various projects, he said, “As long as you keep hitting it out of the park, I’ll keep giving you more stuff to try.”
Roll-off time came, and I received an offer to stay on the Enterprise Architecture team. I was very thankful for the opportunity!
Mark and Mike, the most senior individual contributors, who had worked as a team of two for the previous 10 years, let me join them. It was a great learning opportunity to be around them during the early years of building out our cloud environment. Guiding principles were codification of everything, strong environment parity, and cross-environment/business context isolation. That team functioned as the cloud center of excellence and delivered big projects partnering with the various development teams.

As the obviously junior member, I got the more internal-facing platform projects, but some of them started working my influence skills across teams. One of those was leading the migration from Datadog to New Relic (back when they had the strongest APM/distributed tracing offering) and driving adoption across all our development teams. We rolled it out successfully, and years later I am still the go-to SME on performance monitoring. Currently, I lead a cross-team monitoring focus group, sharing best practices and helping others level up our monitoring across all our apps. Another project of interest in my early years was working with other teams (including HQ approvals) to build out our business unit’s own ADFS cluster, primarily for federating to AWS. This was years before IAM Identity Center, so I built a utility site allowing us to use SAML roles for CLI and SDK access. The site was server-side rendered and built on Lambda and API Gateway. It parsed the SAML assertion and presented a screen similar to the main AWS sign-in, though with Bootstrap styling. Users could click a role, and the AWS JS browser SDK would call STS for temporary credentials. These were rendered into a cross-platform set of aws configure commands that could easily be pasted into a terminal to refresh the appropriate profile.
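The heart of that flow was a single STS call. The real site did this with the JS browser SDK, but here is a minimal Python sketch of the equivalent using boto3; the ARNs, assertion, and profile name are placeholders.

```python
# Sketch of the utility site's core flow, in Python rather than the JS
# browser SDK the real site used. ARNs and the assertion are placeholders.
import boto3


def saml_to_configure_commands(saml_assertion_b64, role_arn, principal_arn,
                               profile="work"):
    """Trade a SAML assertion for temporary credentials and emit the
    cross-platform `aws configure set` commands to paste into a terminal."""
    # assume_role_with_saml is an unsigned STS call, so no existing local
    # credentials are required.
    sts = boto3.client("sts")
    resp = sts.assume_role_with_saml(
        RoleArn=role_arn,
        PrincipalArn=principal_arn,
        SAMLAssertion=saml_assertion_b64,
    )
    creds = resp["Credentials"]
    return "\n".join([
        f"aws configure set profile.{profile}.aws_access_key_id {creds['AccessKeyId']}",
        f"aws configure set profile.{profile}.aws_secret_access_key {creds['SecretAccessKey']}",
        f"aws configure set profile.{profile}.aws_session_token {creds['SessionToken']}",
    ])
```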
I also worked on a proposed enterprise-wide API fronting a tokenization provider. Circumstances changed and that project was implemented at another time with another solution, but thinking through the use cases and writing that facade service was valuable. The Architecture team was the position I stayed in the longest, at 3 years, learning by doing and by osmosis from the experienced people around me.
Our cloud footprint kept growing and reached the point where we needed a dedicated team to manage it. I was selected to bootstrap the new team. We started out as just the cloud team, but around that time (early 2021) we also started building a cloud data engineering team. By the end of my time in the position, we had grown to 9 FTEs and 6 contractors, including a Director of Cloud+Data and a Data Engineering manager. There have been jokes about the number of people needed to replace me, since my switching teams was what spurred Cloud Engineering to get its own manager and further expand the team. :)
Throughout the whole growth phase for both cloud and data, I was in the technical interview loop for every individual contributor candidate. In September 2021, we announced our IPO on Nasdaq as Enact (ACT), and I had my moment of fame with my youngest in Times Square!
Besides the organizational, cross-training, and coordination work, I also dove into the technical opportunities. We follow a strict infrastructure-as-code practice where all AWS resources are in Terraform, covering both our ECS/Lambda services and infrastructure pieces. I made improvements to our workflow in both cases. I reduced 24 different Terraform ECS service deploy pipeline templates into 1 “standard” service type and a Python-generated multi-region version. On the infrastructure project side, I stripped our wrapper scripts to the bare minimum while increasing variable derivation flexibility and reducing the ongoing maintenance of script customizations.

My time on this team was nicely split into two 18-month stints: first a cloud focus, then a data focus. I was heavily involved in the design and buildout of our Snowflake data lake. We used Talend for data integration, orchestrating the ELT process, and tracking SQL-based lineage. I built out the development processes, repo structure, CI/CD, Snowflake environment, Snowflake security model, Talend AWS infrastructure, EventBridge patterns, MLOps pipelines, and documentation, and assisted with ETL design where appropriate. One of the big wins on the CI/CD front was adding SQL linting and static analysis with SQLFluff (a sketch of the kind of check it enables follows below). That addition to the pipeline saved hours of data engineer time by surfacing certain errors at lint time instead of waiting for jobs to run. I had the opportunity to collaborate closely with the data scientists by being the interface between the two teams. I worked with the Data Science leads to develop their processes, create their sandbox environment, and help with last-mile model productionization.
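For a flavor of that SQLFluff check, here is a minimal sketch of a CI lint step using SQLFluff’s Python API, assuming the sqlfluff package and our Snowflake dialect; the repo layout is a placeholder, and our real pipeline wiring differed.

```python
# Minimal sketch of a CI lint step using SQLFluff's Python API.
# Assumes `pip install sqlfluff`; the sql/ directory is a placeholder.
import sys
from pathlib import Path

import sqlfluff

failed = False
for sql_file in Path("sql").rglob("*.sql"):
    # Lint against the Snowflake dialect; returns a list of violation dicts.
    for violation in sqlfluff.lint(sql_file.read_text(), dialect="snowflake"):
        failed = True
        print(f"{sql_file}: {violation['code']} {violation['description']}")

# Fail the pipeline if anything was flagged, before any jobs ever run.
sys.exit(1 if failed else 0)
```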
At the start of 2024, I felt the need to get back into software. There was plenty of scripting and software-adjacent coding going on, but I was missing the all-in experience. I transitioned from a hybrid Architecture team and a Platform Cloud+Data team to a true Product team when I joined Integrations. We are responsible for our external customer interactions: we own the tracking and normalization layers and then hand off to other teams who do rate quotes, orders, and back-office servicing functions. Today, 80% of our business comes through integration channels managed by our team. Collaboration between all the teams is essential for a good customer experience, as each one tackles a different angle of complexity. The Integrations team’s mission is to maintain a consistent API for all our 1,800 B2B customers. To that end, we participate in the MISMO standards organization, similar to the IETF for the web. MISMO focuses on standardizing business process touch points and data interchange formats between all parties in the mortgage process. Our corporate values are Excellence, Improvement, and Connection. As the face of Enact technology to our customers, we want to be customer-obsessed, just like Amazon’s first Leadership Principle.
Customer Obsession
Leaders start with the customer and work backwards. They work vigorously to earn and keep customer trust. Although leaders pay attention to competitors, they obsess over customers.
It has been a great time so far building things that I can see our external and internal customers use, whether that’s launching a new MISMO 3.5 Rate Quote API or a new query tool that gives support a unified view of our on-prem and cloud-native integration systems. I look forward to continuing to grow in customer empathy, product mentality, technical leadership, and engineering practice.