Technology portfolio for Brian Mathiyakom

A silly caricature of Brian by ChatGPT

I'm Brian Mathiyakom, a technologist based out of the San Francisco Bay Area. I have about two decades of experience building platforms and teams.

I have contributed to platforms for:

I have worked with both small startups and large companies, serving as an engineer, manager, and architect. In addition, I briefly ran a software consulting business.

Currently, I am available as a consultant and leadership coach to run technical due diligence, write software to help people, and guide engineering managers.

Outside of work, I'm a Muay Thai student and former musician. I love wine, coffee, and mechanical keyboards.

Writing (on Medium)

CI/CD for a chat bot

Originally written in 2019. In a previous post, Writing a chat bot auto responder for GroupMe, I created a chat bot auto responder. That little project was written in Go and was deployed directly to GCP as a Cloud Function via the gcloud CLI. I used two Cloud Functions: one for testing (responding to a sandbox channel) and one for the actual channel (production). After the initial launch, I pushed the code up to GitHub and wanted to use GitHub Actions as the CI/CD mechanism for future development. This post describes the CI/CD workflow I wanted and what I settled on.

The Problem

It is very likely that I will be the lone developer on this project for the foreseeable future. I want each new commit to run the tests and then deploy the code to the sandbox (DEV) cloud function for further manual testing. When the new code is stable enough, I’d like the code to be pushed to the production (PROD) cloud function.

What I Wanted

I chose GitHub Actions so I didn’t have to integrate with any other third parties¹. GitHub Actions gives us a way to create workflows of actions that react to certain events on a given repository. I wanted the following to occur whenever a commit was pushed to the repo:

The GitHub workflow I wanted

When a commit is pushed to the repo:

1. The unit tests are run (via a golang Docker container).
2. If the commit is pushed onto the master branch (i.e. a pull request was merged), deploy to the PROD cloud function via the GCP CLI Action (“Dummy 1” in the image).
3. Else, if the commit is pushed to a non-master branch (i.e. a commit on a feature branch), deploy to the DEV cloud function via the GCP CLI Action (“Dummy 2” in the image).
The underlying DSL for the workflow looks like this:

```hcl
# main.workflow
workflow "Test & Deploy" {
  resolves = ["Test", "Master Only", "Non-Master Only", "Dummy 1", "Dummy 2"]
  on = "push"
}

action "Master Only" {
  uses = "actions/bin/filter@3c98a2679187369a2116d4f311568596d3725740"
  args = "branch master"
  needs = ["Test"]
}

action "Test" {
  uses = "actions/docker/cli@8cdf801b322af5f369e00d85e9cf3a7122f49108"
  args = "build ."
}

action "Dummy 1" {
  uses = "actions/docker/cli@8cdf801b322af5f369e00d85e9cf3a7122f49108"
  args = "build ."
  needs = ["Master Only"]
}

action "Non-Master Only" {
  uses = "actions/bin/filter@3c98a2679187369a2116d4f311568596d3725740"
  args = "not branch master"
  needs = ["Test"]
}

action "Dummy 2" {
  uses = "actions/docker/cli@8cdf801b322af5f369e00d85e9cf3a7122f49108"
  args = "build ."
  needs = ["Non-Master Only"]
}
```

The problem is that a workflow requires every action to be successful; it does not support either/or branching. This means that both the “Non-Master Only” and “Master Only” filter actions have to succeed. If either of them fails, all dependent actions are cancelled (i.e. the deploy dummy actions are cancelled).

What I Settled On

I wanted to use one workflow to cover both master and non-master branches so that I could reuse as many actions as possible. I also wanted less noise in the Actions tab on GitHub; I didn’t want a “master” workflow to run when, most of the time, commits would be pushed onto non-master branches.

In order to get my desired CI/CD flow, I needed to create two workflows (which are required to live in the same main.workflow file).

The GitHub workflow I ended up with

The image shows the workflow for the master branch. When a push is made onto a non-master branch, the workflow looks like:

The workflow for a non-master branch push

Both workflows run on every single commit.
The workflow DSL then becomes:

```hcl
# main.workflow
workflow "Master: Test & Deploy" {
  resolves = ["Deploy PROD"]
  on = "push"
}

workflow "Branches: Test & Deploy" {
  resolves = ["Deploy DEV"]
  on = "push"
}

action "Master Only" {
  uses = "actions/bin/filter@3c98a2679187369a2116d4f311568596d3725740"
  args = "branch master"
}

action "Non-Master Only" {
  uses = "actions/bin/filter@3c98a2679187369a2116d4f311568596d3725740"
  args = "not branch master"
}

action "Test" {
  uses = "actions/docker/cli@8cdf801b322af5f369e00d85e9cf3a7122f49108"
  args = "build ."
}

action "Auth with GCloud" {
  uses = "actions/gcloud/auth@ba93088eb19c4a04638102a838312bb32de0b052"
  secrets = ["GCLOUD_AUTH"]
}

action "Deploy DEV" {
  uses = "actions/gcloud/cli@ba93088eb19c4a04638102a838312bb32de0b052"
  needs = ["Test", "Non-Master Only", "Auth with GCloud"]
  args = "functions deploy <CLOUD_FUNC_NAME> --project <PROJECT_NAME> --runtime go111 --trigger-http"
}

action "Deploy PROD" {
  uses = "actions/gcloud/cli@ba93088eb19c4a04638102a838312bb32de0b052"
  needs = ["Test", "Master Only", "Auth with GCloud"]
  args = "functions deploy <CLOUD_FUNC_NAME> --entry-point <FUNC_ENTRY> --project <PROJECT_NAME> --runtime go111 --trigger-http"
}
```

Note that the GCP secret key (GCLOUD_AUTH) is stored directly on GitHub (not in source control) and belongs to a GCP service account that can only manipulate these two cloud functions.

I was able to reuse the GCP authentication and test actions, though visualizing the two workflows by reading just the DSL is a bit difficult. This workflow runs tests, authenticates to GCP, and filters the git branch all in parallel. If any of these three steps fails, all the other steps are cancelled or fail also.

Overall, not a bad setup. Deploying to DEV on every commit could trample over the work of others, but it’s okay since I’m the only developer 😅.

¹: Granted, I do like CircleCI’s product offering.

Writing a chat bot auto responder for GroupMe

Originally written in 2019. Chat bots are popular in the industry right now. They are used for customer service, devops, and even product management. In this post, I’ll dive into writing a very simple bot while dealing with an inconsistent chat service API.

The Problem

An organization that I belong to uses GroupMe as their group chat solution. When new members join the group chat (channel), someone from the leadership team sends them a direct message (DM) welcoming them and asking them to fill out a Google Forms survey. Since we’re not always active in the channel, we risk a slow turnaround between someone joining the channel and us reaching out to them (attrition is a problem). I felt that this process could use some automation.

The Constraints

- I wanted a lightweight solution (i.e. don’t change the process too much).
- The solution, if it involved tech, should be cheap (a.k.a. cost $0).
- The channel user activity was relatively low (mostly used for announcements and some bursts of chatter).
- The solution should still feel “high-touch”. It should feel personal when user contact is made.

Solution: Make an Auto Responder

When new members join the channel, have something automatically DM that person, greeting them and asking them to fill out our survey. The question then becomes: how?

GroupMe has a notion of chat bots: server-side configured entities that join a channel and listen to all the messages and actions that happen in it. For each event that happens, GroupMe sends a callback (via HTTP) for you to reason about. A possible auto responder could work like this:

Sequence diagram showing how an auto responder could interact with GroupMe

Straightforward. How do we deal with the constraints?

- Lightweight: The process stays the same; a user joins, we send them a message.
- Cheap: We own the auto responder service, so we should host it somewhere free (the GCP / AWS / Heroku micro tiers are all viable).
- Scale: The cheapest cloud hosting tiers are sufficient from a throughput and response time standpoint.
- High-Touch: If we can send them a message as one of us, instead of as the bot, even better.

The first-launched version of this setup is written in Go and runs as a CloudFunction in GCP¹. The CloudFunction was estimated to be free given our traffic rates. I chose Go because CloudFunctions support only a few languages: javascript (via node), python, and go. I find no joy in coding in javascript. I hadn’t written a lick of python in many years. I didn’t know Go (still don’t), but thought it could be fun to learn a bit of it for a small side project.

Issues

The GroupMe bot sends a callback request for every bit of activity in the channel that it’s listening to. The callback payload from the GroupMe bot looks like the following:

```json
{
  "attachments": [],
  "avatar_url": "",
  "created_at": 1302623328,
  "group_id": "1234567890",
  "id": "1234567890",
  "name": "GroupMe",
  "sender_id": "2347890234",
  "sender_type": "system",
  "source_guid": "sdldsfv78978cvE23df",
  "system": true,
  "text": "Alice added Bob to the group.",
  "user_id": "1234567890"
}
```

I need enough information from this notification to:

- deduce whether this is a “user joined the group” event
- if so, get a unique user identifier so that I can message the user directly

There wasn’t an “event type” in the payload, so I used regular expressions on the text attribute to infer whether a payload corresponded to one of the two possible join events (a user joined the group on their own, or a set of users was invited to the group by an existing group member).

I thought that the user_id was the id of the user that joined the group. I was wrong. In the wild, the user_id is the id of the user that created the text. So if a user sends a message to the channel, the id belongs to that user. For “join events”, the user that wrote that “message” to the channel is the system (GroupMe), which has the special id of 0.
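As a rough sketch of the callback handling and regex-based join detection described above — the struct, function names, and exact patterns here are my own illustration, not the actual project’s code:

```go
package main

import (
	"encoding/json"
	"net/http"
	"regexp"
)

// Payload mirrors the GroupMe bot callback fields we care about.
type Payload struct {
	SenderType string `json:"sender_type"`
	System     bool   `json:"system"`
	Text       string `json:"text"`
}

// Illustrative patterns for the two join events; the real system
// message wording should be confirmed against live callbacks.
var (
	addedRe  = regexp.MustCompile(`added .+ to the group`)
	joinedRe = regexp.MustCompile(`has joined the group`)
)

// isJoinEvent infers from the text whether a system message is a join event.
func isJoinEvent(p Payload) bool {
	if !p.System {
		return false
	}
	return addedRe.MatchString(p.Text) || joinedRe.MatchString(p.Text)
}

// Handler is a CloudFunction-style HTTP entry point.
func Handler(w http.ResponseWriter, r *http.Request) {
	var p Payload
	if err := json.NewDecoder(r.Body).Decode(&p); err != nil {
		http.Error(w, "bad payload", http.StatusBadRequest)
		return
	}
	if isJoinEvent(p) {
		// Look up the joining user's id and send the welcome DM here.
	}
	w.WriteHeader(http.StatusOK)
}
```

Everything hinges on classifying the text correctly, since (as noted above) the callback carries no explicit event type.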
There’s no point in sending a direct message to the system. Without a user id, I could not message that user through the GroupMe /direct_messages API. I needed to get the user id(s) another way.

One option was to look up the group’s member list from the /groups/:id API. I would have to match the user’s name against the list of members (though names are also mutable). That API also doesn’t support any member list filtering, sorting, or pagination. I didn’t want to use an API whose response body would grow at the rate of users being added to the group.

A second option was to not rely on the GroupMe bot events at all. There exists a long-polled/websockets API for GroupMe. I could have listened to our channel on my own and reacted to its push messages. The problem with this approach is that the payload looks basically like the bot’s payload:

```json
[
  {
    "id": "5",
    "clientId": "0w1hcbv0yv3puw0bptd6c0fq2i1c",
    "channel": "/meta/connect",
    "successful": true,
    "advice": {
      "reconnect": "retry",
      "interval": 0,
      "timeout": 30000
    }
  },
  {
    "channel": "/user/185",
    "data": {
      "type": "line.create",
      "subject": {
        "name": "Andygv",
        "avatar_url": null,
        "location": {
          "name": null,
          "lng": null,
          "foursquare_checkin": false,
          "foursquare_venue_id": null,
          "lat": null
        },
        "created_at": 1322557919,
        "picture_url": null,
        "system": false,
        "text": "hey",
        "group_id": "1835",
        "id": "15717",
        "user_id": "162",
        "source_guid": "GUID 13225579210290"
      },
      "alert": "Andygv: hey"
    },
    "clientId": "1lhg38m0sk6b63080mpc71r9d7q1",
    "id": "4uso9uuv78tg4l7csica1kc4c",
    "authenticated": true
  }
]
```

Also, I didn’t want my app to be long-lived (hosting costs), since join events aren’t as common as other channel activity. Note that there isn’t an API to get an individual user’s information (aside from your own).

I chose a third option.
When a “join event” is sent from the bot, I would ask for the most recent N messages from that channel, match the join event’s message id against the message id for that event in the channel (they’re the same!), and use the message data to get the user id. Take a look at a response from the :group_id/messages API:

```json
{
  "response": {
    "count": 42,
    "messages": [
      {
        "attachments": [],
        "avatar_url": null,
        "created_at": 1554426108,
        "favorited_by": [],
        "group_id": "231412342314",
        "id": "155442610860071985",
        "name": "GroupMe",
        "sender_id": "system",
        "sender_type": "system",
        "source_guid": "5053cc60396c013725b922000b9ea952",
        "system": true,
        "text": "Bob added Alice to the group.",
        "user_id": "system",
        "event": {
          "type": "membership.announce.added",
          "data": {
            "added_users": [{ "id": 1231241235, "nickname": "Alice" }],
            "adder_user": { "id": 234234234, "nickname": "Bob" }
          }
        },
        "platform": "gm"
      }
    ],
    "meta": { "code": 200 }
  }
}
```

Surprisingly, each message has an optional event attribute with a type and the applicable user ids! I wish the event was included in the callback from the bot. The updated sequence flow looks like:

Updated sequence diagram showing how the auto responder actually works with GroupMe

Extras

The GroupMe API requires a token for authentication. This token is stored as an environment variable on the CloudFunction and is not stored in version control. Basic stuff.

There is a single http client used across invocations of the cloud function. This allows me to use connection pooling so that I can avoid multiple SSL handshakes when talking to the GroupMe API.

Intentional Holes

This setup works as intended, but there are cases that I purposefully don’t account for. It may be possible for GroupMe to send duplicate events, and the responder does not care: it does not store data on whether it has responded to the same event. I haven’t seen duplicate events yet, but even if they occurred, I deemed “users receiving dupe messages” as OK (low-traffic channel).
It is also possible that GroupMe’s bot API may not send events at all. There is no reconciliation process to check that every join event has been handled.

¹: I originally wrote all of this in Elixir/Phoenix and ran it in GCP AppEngine. The problem was that in order to run Elixir code, I needed to run on AppEngine’s Flex Environment, which is not a free tier. Sad, because Elixir is my current favorite language.
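Looking back at the third option, the id-matching step could be sketched roughly like this — the struct and helper names are mine, modeled on the :group_id/messages response shown above:

```go
package main

import "encoding/json"

// Message models the parts of the GroupMe messages API response we need.
type Message struct {
	ID    string `json:"id"`
	Event *struct {
		Type string `json:"type"`
		Data struct {
			AddedUsers []struct {
				ID       int64  `json:"id"`
				Nickname string `json:"nickname"`
			} `json:"added_users"`
		} `json:"data"`
	} `json:"event"`
}

type messagesResponse struct {
	Response struct {
		Messages []Message `json:"messages"`
	} `json:"response"`
}

// addedUserIDs finds the channel message whose id matches the join-event
// callback's id and returns the ids of the users who were added.
func addedUserIDs(body []byte, callbackID string) ([]int64, error) {
	var mr messagesResponse
	if err := json.Unmarshal(body, &mr); err != nil {
		return nil, err
	}
	var ids []int64
	for _, m := range mr.Response.Messages {
		if m.ID != callbackID || m.Event == nil {
			continue
		}
		for _, u := range m.Event.Data.AddedUsers {
			ids = append(ids, u.ID)
		}
	}
	return ids, nil
}
```

With the user ids in hand, the responder can then call the /direct_messages API for each added user.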

Generating custom resumes for job applications using Terminal UIs

I’m in the middle of a job search. Like many in the technology industry, I was laid off in late 2023. I thought getting some extended time away from a job would be a nice change of pace. Refreshing, even. Not the case. I had this nagging anxiety about what my next role would be: Do I continue doing more of the same (honing existing skills is a good thing)? Do I try to jump up a level with a broader set of responsibilities? Do I switch careers again, since I’ve jumped between individual contributor and manager roles in the past? Fortunately, I have some savings and an awesome and employed wife (thank you, Jesus), so I have space to think through these meta questions. Maybe I’ll write about how I approached them someday.

Writing resumes for each job application

Right now, I have a grand total of 94 job applications to companies in a variety of fields that drew my attention. I applied to roles as a software engineer, as a manager/director, and to executive roles (at smaller startups). This means that the accomplishments in my submitted resumes have been tailored for each application. I presume that it’s wiser to highlight engineering accomplishments when applying for an engineering role. And vice versa for management roles. I don’t want my resume to be too long either; recruiters spend mere seconds reviewing a single resume. With tips from friends and easy-to-customize designs from Canva, I created separate resumes for each role. And each resume fit into one page. That’s not a bad result when trying to simplify a nearly two-decade career in tech!

While Canva mostly worked, it is also a WYSIWYG app. That means that if I wanted to reposition, add, or remove content, I then needed to adjust the alignment and spacing of the surrounding content and export the result as a PDF. It’s a straightforward but tedious process.

How my previous resume was maintained 🛠

My resume before Canva was written in Markdown (MD) and CSS.
I would write the content in MD and style the layout with CSS. I used Pandoc to transform the MD+CSS into a PDF, using weasyprint as the underlying PDF engine:

```shell
pandoc --pdf-engine=weasyprint -s --section-divs \
  -c style.css -o resume.pdf
```

This setup was also OK. Editing text files is better for my workflow. The CSS adjusted for spacing issues automatically as the content changed. And CSS properties like page-break-inside made for easy reading (i.e. keeping all accomplishments for a given job on the same page). Except I’m not a great designer, so the overall design wasn’t as “clean”. And as I would find out later: PDF engines don’t understand many CSS properties. And I would have had to keep separate MD files if I wanted to subset my resume into role buckets (engineer-focused, manager-focused).

Tinkering and Ideas 🤔

I’ve been interested in Terminal User Interfaces (TUIs) lately. A number of them have been featured in online forums that I visit. They reminded me a lot of old bulletin board games (quite nostalgic). So I had the idea of making a resume generator with a TUI as the interface.

Writing a resume generator

The resume generator needed to combine the things I like about having a resume in Canva and in Markdown. Given my professional employment history and full list of accomplishments per job:

- Allow me to easily pick which accomplishments to include in the generated resume via the command line.
- Support CSS for styling the generated resume.
- Support PDF as the final output format of the generated resume.
- Keep the generated resume to one printed page¹.
- Allow me to easily edit logistical info in the resume (skills, contact info, education, etc.).

I decided to try out Charm as the TUI framework. Specifically, their huh library seemed like a good starting point for the “accomplishment picking”.
The first end-to-end iteration of the generator worked in the following way:

1. Display the accomplishment multi-select form (the accomplishments were hard-coded into the app).
2. Select the accomplishments I wanted.
3. Perform variable substitution of the accomplishments into an HTML file. This was done with Go Templates, since the app was written in Go. Charm is a library for Go, so choosing Go as the base language was a choice made for me once I decided on Charm in the first place.
4. Use Pandoc to transform the resulting HTML file into a PDF.

Fun with PDF engines, or not…

I recreated my favorite Canva resume template in HTML + CSS. I used Tailwind to help me style it. But the resulting PDF didn’t look at all like the HTML. Even when I pre-processed a static CSS file to include the Tailwind properties I used in the HTML (via tailwindcss), Pandoc and the PDF engine just didn’t properly interpret the CSS properties. I could have spent more time trying to make Pandoc happy, like rewriting the CSS without Tailwind. That would have been more prudent. But I found that having Tailwind available made layout adjustments easier. So I considered the alternative of dropping Pandoc.

What is really good at rendering HTML/CSS and exporting to PDF? Web browsers. I got the idea to use a headless Chromium instance to render the resume HTML and then have it export the page to PDF. I used a Playwright library for Go to do this. Aside from being a more heavyweight process (launching a browser), it worked really well.

Open-sourcing

After showing this to my wife, she asked if she could do this with her resume too. That began the journey of “refactoring so that someone else can generate custom resumes for themselves”. You can find the code at GitHub.

A screenshot of selecting accomplishments in the resume generator

The current workflow is now:

1. Read resume data from data.json.
2. Display the accomplishment multi-select form with Charm/huh.
3. Select the accomplishments I want via keyboard.
4. Perform variable substitution of the accomplishments into an HTML file via Go Templates.
5. Export the template into HTML in /tmp.
6. Launch Playwright and have it open the HTML file in /tmp.
7. Ask Playwright to export a PDF of the HTML page into the current directory.

I can highlight relevant accomplishments on a per-job-application basis by generating a resume specific to the application.

Screenshot of example resume PDF

I considered using some form of GenAI to take in the job description and my full list of accomplishments and write me a resume. But I didn’t want to work with the AI to adjust its writing style to mine, nor did I want to figure out how to extract its output into my HTML layout². Maybe I’ll have the energy to play with this later, but now isn’t the time.

If you’re job hunting right now and are feeling overwhelmed, then I know the feeling. You’re not alone 💛.

¹: I didn’t end up implementing the keep-on-one-page feature. Didn’t need it in the end.

²: I would also want to do all of this locally and not send ChatGPT (or friends) too much of my information.

Building a web app for a church that costs $0.02/month to keep running

This is a story about how I designed and built a web app for a church that had a $0 engineering budget.

The church had Bible-study groups that met throughout the week, called small groups. Before, when folks (lookers) wanted to join a small group, they contacted a coordinator. The coordinator worked with group leaders to find out which groups had openings and could accommodate the time and location for the lookers. Sounds like a two-sided market, huh? It is. Eventually, the coordinator left and the church was without staff to handle this task. The church had a few options for how to proceed:

- Wait for another person to volunteer to be a coordinator.
- Do nothing; existing groups would continue, but new people would not be able to find or join a group aside from word-of-mouth (e.g. meeting a group leader).
- Figure out a new process to deal with this problem.

I was not in a position to decide what the church would do, but I was willing to propose a solution.

Can we streamline?

I opted for the “figure out a new process” approach. My idea was to remove the middleman (the coordinator) and connect lookers with group leaders directly. Group leaders know what is happening with their groups and should be empowered to reply to messages from lookers. I wanted to create a web app where lookers could search for groups to potentially join. Lookers could filter groups by:

- audience (i.e. young adults, families, general, men, women)
- day of meeting
- location (i.e. city)
- language
- topics discussed (i.e. Bible study, book study)

When lookers found the group(s) they were interested in, they could send a message to the group leaders (via a form) expressing their interest. I also wanted a way for folks in the church to apply to be group leaders themselves.

What was I working with?

Like many problems, we have to understand our constraints:

- The group information (meeting times, leader info, locations, description, etc.) didn’t change often. It was mostly static data.
- The church’s online traffic didn’t exceed 100K requests per month. I imagine few churches do.
- There were fewer than 100 small groups in the church. Growth wasn’t a goal for the church, so an explosion of groups was unlikely. Even 1,000 groups is negligible from a technical standpoint.

And there are experiences that I wanted to be smooth for everyone involved:

- A snappy, responsive, and pleasant experience for lookers trying to find a small group to join.
- A straightforward way for lookers to message group leaders without giving personal group leader information away (weirdos on the internet).
- A simple way for group leaders to reply to a looker (i.e. click this email to reply to a looker).
- A reliable way to audit requests from lookers for each small group.
- A way for church staff to view and edit small group details.
- A simple enough way for group advisors to review small group leader applications.

An option I considered but decided against

The church had a static site powered by Squarespace along with a church management service. It was possible to create a set of pages for each group and use the church management service for the form submissions. This could have been nice because the cost of Squarespace was already accounted for by the church. The trade-offs would be maintenance and UX:

- Updates to the general layout would have to be applied to each page individually. Adding or removing common fields from an existing page would also have to be done to every other page individually. Squarespace didn’t have Layout pages at the time. Maybe this isn’t an issue anymore.
- Implementing filters would be a pain. Imagine writing custom javascript that contained the group-filtering logic, the group data (so that you can manipulate which groups are shown to the looker), and UI components, all of which would have to be included in a <squarespace:script>. Yikes, no thanks!
Filtering could also have been dropped as a feature, but scrolling through a list to find something that fits your needs without filtering is such a bad experience. The form UX/UI available in the church management service was also really ugly; the forms looked corporate and uninviting.

So, off to building our own

Through hundreds of commits and small releases, I ended up with the following architecture and technology choices¹.

Runtime architecture diagram

This image represents the services used when a looker visits and interacts with the site. I picked services that were both feature-rich enough and had generous free-tier pricing:

- Cloudflare managed DNS and page caching.
- Netlify acted as the CDN and static site host.
- Netlify Forms processed all submitted forms and handled spam detection.
- Zapier orchestrated background work by piping form submissions to different services depending on the use case.
- Postmark sent group interest requests from lookers to group leaders as transactional emails.
- Linear stored group leadership form submissions as new issues/tasks.
- Sentry managed production error reports.

Wait, it’s a static site?

The web app itself, as you can infer, was a static site generated using NextJS. The source of truth for the small group data (details about each group, leader info, group advisor info) was stored in Airtable (another service with a generous free tier). Designated church staff edited the data through Airtable. Here’s the fun part: when the app was deployed, the individual small group pages would be generated at build time and served as static pages in production.

App build and data-fetching diagram

This allowed for snappy response times for lookers (minus the NextJS bundle download time). Church staff could easily manage the data set without needing a custom admin tool. The problem with this setup is that it relied on app deployments to update the site content.
In reality, group leaders pinged me directly when they needed to change their group information. This was rare; maybe once every two months.

The GraphQL service was implemented as a Google Cloud Function that could spin down to 0 instances. The cold-start time for the Function could be greater than 3 seconds, but that didn’t matter: it only affected the build time of the app, not the experience of the site visitor.

Gitlab was the source control manager. A combo of Gitlab CI and Netlify powered the deployment pipeline, which ran tests (unit and end-to-end), a linter, Lighthouse analysis, and image compression.

Isn’t this setup complicated?

This set of technology choices and architecture is fairly complicated for what was being built. The constraint of not spending money led me to a bit of a Rube Goldberg machine of an architecture, in that there are a number of moving parts. If there were a more elegant option using existing church-approved tools, I would have gone that route. Honestly, I could have advocated for moving the entire church website to another provider (or even something like Wordpress). That would have taken more time to convince everyone. And this custom-built route was more fun. Maybe less pragmatic. In this scenario, that’s okay.

How did everything turn out?

This entire setup balanced machine costs, people time for the church staff, and runtime performance for the small group lookers. The monthly cost was $0.02: the cost of the Google Cloud infrastructure.

Screenshot of the small group filtering on the web app

¹: I also used Gatsby for the frontend in the initial release. Its “hydrate-at-build-time” approach was the inspiration for the architecture. Other tech used and abandoned included Vercel (for site hosting) and Bulma (for CSS). They are lovely technologies, but I needed to keep costs down. Vercel didn’t have form submission support at the time; Netlify did.

A systems-approach to department reorgs

Tell me everything, from the beginning

I was an engineering manager (EM) in an Engineering department of ~20 people. The entire company employed ~60 people. We organized teams around product domains. Let’s call the teams Alfa, Bravo, Charlie, Delta, Echo, and Foxtrot.

- Team Charlie’s product domain was being deprioritized from the company’s roadmap. Team Charlie was down to 1 engineer (after one was let go and another left).
- Team Alfa’s EM wanted to transition back to an IC role. Team Alfa was down to 1 engineer (along with their tech lead EM).
- I was the EM for Teams Delta, Echo, and Foxtrot.

This situation doesn’t look dramatic, but relative to the company size, it was. The company was very tight-knit and operated in a way where people knew each other fairly well. An organizational change needed to show care for the people impacted while aligning itself to the company’s business direction. I was asked for my opinion on where the people on Teams Alfa and Charlie would end up if we did a reorg.

Treat it like a systems problem

I devised three scenarios where we moved the folks on Alfa and Charlie into the existing teams. The scenarios included the possibility of shuffling existing teammates on Delta and Echo (including myself). This also meant combining product domains, like merging the domains of Delta and Alfa. I mapped out who would be on each team (including managers). For each person, I wrote a benefits, trade-offs, and mitigations matrix:

- Benefits: why it was good for this person to be on this team, from a person, team, and company standpoint.
- Trade-offs: why it was a bad idea for this person to be on this team, from a person, team, and company standpoint.
- Mitigations: how we could mitigate the trade-off or “bad” reason.
I was able to do this only because I had built relationships with all my reports: knowing what they liked about their work and their teammates, their personality types, how painful certain types of conflict would be for each of them, and the overall team dynamic. Taking a systems approach to this reorg required a people-centric understanding to inform the process. These scenarios, or mappings, were presented to and reviewed by our team of EMs and our manager, the VP of Engineering (VPE).

What happened? How did it go?

The mapping gave us a way to sensibly discuss the people shuffle. It allowed us to focus on each person individually and provided clarity on how the shuffle would affect the department with respect to the company’s business direction. Ultimately, our VPE was responsible for deciding which team setup to execute.

After discussions with company leadership, we talked to each individual affected by the shuffle prior to announcing it. This gave them time to process the upcoming change without feeling blindsided. It also gave us one final set of data points about whether our decision was going to immediately fail.

The reorg was announced at a company all-hands and accompanied by a document outlining our rationale, the team shuffle, and the timeline for the transition. We also gave space for folks to ask questions (either in the doc or privately). The reorg was generally well-received, as far as reorgs go. Folks appreciated having the rationale explained to them and the approach we took to get there. The mapping used to discuss the people shuffle was never shared with anyone outside of the leadership team.

The following is an example of how I outlined each shuffle scenario. I used the mind map template from Miro.

Mind map of a team shuffle organized by benefits/trade-offs/mitigations for each person on the team.