Seamless & Secure: A Guide to SSO in Sendwithus https://www.dyspatch.io/blog/seamless-secure-a-guide-to-sso-in-sendwithus/ Thu, 17 Apr 2025 16:23:31 +0000 https://www.dyspatch.io/?p=121730 What is SSO? Single sign-on (SSO) is an account security feature that allows customers to mandate sign-in requirements and member access to their systems through a single identity provider. When configured, instead of creating a new username and password for each individual platform, your team members will use login information from your identity provider to […]

AMP for Email: Making a Blackjack Game for the Inbox https://www.dyspatch.io/blog/amp-for-email-blackjack-game/ Mon, 12 Sep 2022 18:15:06 +0000 https://www.dyspatch.io/?p=17049

Before I began working at Dyspatch, I had no idea that using AMP for an App inside of an Email was even possible. For the better part of three decades, it really wasn’t.

However, when I learned about AMP for Email and how Dyspatch could help me develop with it, I became excited. Like real excited.


My journey into becoming a developer started with the love of one thing: video games. I played them all the time, and one day I decided that I wanted to learn how to make one. This is how I discovered coding and found a passion for it.

So it’s really no surprise that my first immediate thought about AMP for Email and Dyspatch was: “I wonder if I can make a game with this?”

After some reading, learning the ins-and-outs, and a lot of experimenting, the answer was clear: Yes, you totally can! And in this post I’m going to walk you through how one might go about doing just that.

And considering gamification is one of the best ways to increase marketing engagement, you can consider this some light pro-D research 😉

While I’ve made a few games with AMP for Email that work inside my Gmail inbox (it’s kinda my thing), one that stands out to me is my take on the classic card game, Blackjack. I lovingly refer to this game as AMPJack.

The Setup for our AMP for Email Game

The rules to AMPJack are fairly simple:

  • A player and a dealer are both dealt two cards each. The player can see both of their own cards, but only one of the dealer’s cards.
  • A player can ask for more cards, one at a time, by “hitting”, or, if they think they have enough to beat the dealer, they can “stay”.
  • The goal is to have a hand that totals as close to 21 as possible without going over. If either the dealer or the player goes over 21, they “bust” and lose. If neither goes over, whoever has the higher-value hand of the two wins.

Blackjack!

There are other things that you can do in Blackjack, such as splitting and betting. But I elected to try and keep things simple as I wasn’t entirely sure how to pull this off at first!

Armed with a plan of how a basic AMPJack game will play out, I began thinking about what I needed to do:

  1. I would need to display and update the Dealer’s cards
  2. I would need to display and update the Player’s cards
  3. I would need buttons for “hitting” and “staying”, and I also wanted to add a “new game” option
  4. In addition to displaying cards, I would need to be able to display and update the total of all cards in each hand
  5. I also wanted to display a message based on the status of the game, like if the player wins or loses.

The next question was… how can I do all of this in an email without being able to use my trusty friend, Javascript?

AMP for Email adds a lot of functionality to emails, but one of the coolest things it does is give you the ability to receive and send data via GET and POST requests right from your inbox! You can then use those requests inside of an Email and update the content dynamically.

In an “aha!” sort of moment, I came up with the idea that I could track both dealer and player cards on a server, make game-logic decisions based on requests coming in from the client, and serve the updated values back to the client via an API to update the game.

Let’s Talk About the Server

The first iteration of this server used Node/Express, but I then discovered FastAPI, a Python framework that was perfect for what I was trying to accomplish. As for hosting, I went with Heroku, since it let me package everything in a Docker container and have Heroku handle the rest.

There are 6 endpoints that AMPJack hits on the API:

  1. /playerCards,
  2. /dealerCards,
  3. /gameStatus,
  4. /hit,
  5. /stay,
  6. /reset.

While I could have used a single endpoint for everything, I wanted to separate each concern into its own endpoint so that I could debug more easily if something went wrong.

/playerCards

A GET request that returns an array of objects representing each card currently in the player’s hand. It also returns the total value of the cards together, and each object has an image property to make sure the player sees the correct card.

/dealerCards

This is the same as /playerCards except it handles the dealer’s hand, and will initially only return one of the cards as the other is face down until a player “stays”.

/gameStatus

This returns a string based on what stage of the game we are in. If a player wins, loses or busts it returns a phrase indicating what has happened. If the game is still being played it returns an empty string.

/hit

This is a POST request that adds one more card to the Player’s Hand.

/stay

This is another POST request that triggers an action to show the Dealer’s entire hand and continuously adds cards to the dealer’s hand until it either reaches 17 or goes over 21.

/reset

A GET request that resets the Player’s Hand, Dealer’s Hand, game status, shuffles the deck and deals again.
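To make the shape of this concrete, here is a rough FastAPI sketch of a couple of these endpoints. This is not the actual AMPJack server code: the route paths come from the list above, but the deck, the in-memory game state, and the response shapes are simplified placeholders, and the CORS/AMP headers a real AMP email server needs are omitted.

# Hypothetical sketch, not the real AMPJack server: paths match the list above,
# everything else (deck, state, response shape) is a simplified placeholder.
import random

from fastapi import FastAPI

app = FastAPI()

RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
DECK = [{"rank": r, "value": min(i + 1, 10), "img": f"/cards/{r}.png"}
        for i, r in enumerate(RANKS)]

state = {"player": [], "dealer": [], "status": ""}


def hand_total(hand):
    return sum(card["value"] for card in hand)


@app.get("/playerCards")
def player_cards():
    # AMP lists expect a top-level "items" array (more on this below).
    return {"items": [{"playersHand": state["player"],
                       "total": hand_total(state["player"])}]}


@app.post("/hit")
def hit():
    # Add one card to the player's hand and update the game status.
    state["player"].append(random.choice(DECK))
    if hand_total(state["player"]) > 21:
        state["status"] = "Bust! Dealer wins."
    return {"ok": True}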

AMP for Email game: AMPJack (Blackjack)

With the API all set up and ready to go, it was time to put this into the Dyspatch drag-and-drop editor, and use the power of AMP for Email to make a fun and interactive email!

Making a GET Request

Dyspatch comes with its own markup language—similar to HTML—called DML.

Using different components, you can quickly put together beautiful-looking Emails that are translated into Email HTML (compatible with any email client) when you’re ready to export. Some of the components in DML are made especially for working with AMP for Email. Making a GET request and displaying the returned data requires a component called <dys-list>.

<dys-list> expects a src (in this case, our API endpoint) as well as a height and width, which specify how much space in our Email the list will take up. A safe Email width is 600px, and I chose 150px for the height so there’s enough space to show an entire card with a little wiggle room. I have also given it an id of “playersHand”, as we’ll need this as a reference point later on in our program.
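As a sketch (the post’s original markup isn’t reproduced here, and the endpoint URL is a placeholder), the list declaration might look like this:

<dys-list src="https://ampjack.example.com/playerCards" width="600" height="150" id="playersHand">
  <!-- <dys-dynamic> and <dys-static> fallback content go in here, as described below -->
</dys-list>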

AMP for Email lists expect a JSON response starting with an array of “items”. So any response that you want to use inside of your Email should look similar to this:

{
  "items": [
    {
      "content": "Content"
    }
  ]
}

In the case of the in-game screenshot above, the response would look like this:
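(The screenshot’s exact values aren’t reproduced here; this is an approximation of the shape described later in the post, with placeholder card images.)

{
  "items": [
    {
      "playersHand": [
        { "img": "https://ampjack.example.com/cards/king_of_hearts.png" },
        { "img": "https://ampjack.example.com/cards/7_of_clubs.png" }
      ],
      "total": 17
    }
  ]
}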

It’s important to know that not all Email clients support AMP for Email. This is why with <dys-list> we need to include two other components, <dys-dynamic> and <dys-static>.

<dys-dynamic> is what displays our dynamic content if AMP for Email is supported, but it’s important that we use a “static fallback” so that users receive some sort of message saying that the Email needs to be viewed in a client that supports AMP for Email.

An alternative here would be to add a link or a button that takes the user to the web where a similar functioning app lives, but as my intent with this project was to focus on the Email aspect of this app, a simple text message should suffice.

Now For the Dynamic Content!

To update an HTML email without the use of a fancy framework like React, we need to use a templating language. Lucky for us, Dyspatch and AMP for Email support the Mustache templating language. We have to place the template inside of <dys-template> so it knows to render the content correctly.

What this does is allow us to use the Mustache templating language to show dynamic content. Since we know <dys-list> expects an array of “items” we don’t have to include that in our code.

Mustache is a logicless templating language, which means we can’t use typical loops or if statements. We still need to loop through our playersHand array so that we can get all the information out of it. Luckily, Mustache does have a way to iterate over arrays: place your markup between {{#nameOfArray}} and {{/nameOfArray}}.
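The original template markup isn’t shown here, but based on the description that follows, it would look roughly like this (the amp-img tag, its dimensions, and the total line are assumptions):

<dys-template>
  {{#playersHand}}
    <amp-img src="{{img}}" width="100" height="140" alt="Playing card"></amp-img>
  {{/playersHand}}
  <p>Total: {{total}}</p>
</dys-template>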

Knowing that, we can see that the above bit of code is actually looping through the “playersHand” array that is returned from the API. Specifically we are pulling out the “img” property from each object inside the playersHand array (src = “{{img}}”). This is how the proper cards are displayed in the browser.

So altogether, this requests JSON from the /playerCards endpoint on our server. It expects an array called “items” that includes another array called “playersHand”, which it loops through, pulling the img property out of every object in the “playersHand” array.

Groovy.

Making an AMP for Email POST Request

Another aspect that I needed to think about was how I would go about updating the hands of both player and dealer whenever a player hits or stays. Luckily AMP for Email has a way I can do just that.

When the email is first loaded, <dys-list> immediately makes the GET request to receive the initial state of the Player’s Hand. As I mentioned earlier, <dys-list> has an “id” property that I’ve given the value “playersHand”.

This allows me to use AMP for Email to specifically select it for an action based on an event.

The event in this case would be if the user clicks on the “Hit” button. I elected to put this inside a form because it’s essentially sending a signal to the server as a POST request that the user has clicked on the “hit button”.

Inside of that <dys-form> you’ll notice an attribute called “on=”, this is an AMP for Email attribute.

AMP for Email uses Events and Actions, which you can read more about here.

One of the events the “on” attribute can listen for is a successful form submit. When a user clicks “hit”, the server adds another card to the playersHand array and responds that it successfully received and processed the request. AMP for Email then takes an action: on="submit-success: playersHand.refresh".

What this is saying is that “on a successful submit, refresh the list with the id playersHand.” This is how we trigger the GET request again with the updated information so that the current and proper hand is being displayed.
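A sketch of the “Hit” form might look something like the following. Only the on= value comes from the post; the other attribute names, the button markup, and the URL are assumptions:

<dys-form method="POST" action="https://ampjack.example.com/hit" on="submit-success: playersHand.refresh">
  <button type="submit">Hit</button>
</dys-form>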

With AMP for Email, we can also take multiple actions based on one event. This is really important for our AMPJack game because not only are we adding new cards when a player clicks “hit”, we’re also checking whether the player’s total goes over 21. If, while hitting, a player goes over 21, they automatically lose; that’s a bust.

As I previously mentioned we have an endpoint that serves the current game status. This is also a <dys-list> that is hitting the “/gameStatus” endpoint.

Just like with the player’s hand, we receive a JSON response and pull out the “gameStatus” property from the “items” array that is being returned. It also has an id of “gameStatus” so we can reference it when we want to take an action with it.

The way we create multiple actions based on one event is by separating them with commas.
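For AMPJack, that means the “Hit” form’s on= attribute grows to refresh both lists:

on="submit-success: playersHand.refresh, gameStatus.refresh"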

Now we are not only refreshing the playersHand on a successful form submit, we are also refreshing the gameStatus. If the player goes over 21 the gameStatus list is refreshed and displays a message saying the player has lost.

We know how to set up the player’s hands and we have a space in our email that is listening to and can display the game status, but what about the dealer’s hand?

Setting up the dealer’s hand is basically the same as the player’s; instead of pointing at the /playerCards endpoint, we point to /dealerCards. We also give it an id, so that when a player clicks the “stay” button, we can handle it like we would a hit: the client sends a POST request to the /stay endpoint, then refreshes the dealersHand list, which also updates the gameStatus list.

The AMP for Email part for setting up our game is complete 🥳

Polishing Up the Front End

The final step is to set up what our front end is going to look like. There are lots of options here, and yet another great thing about AMP for Email is that it supports a lot more fun and useful CSS. In my final version I used Flexbox to make sure the cards lined up the way I wanted. Definitely not something I could have done in normal email HTML!

CSS with AMP for Email

You can read about what CSS is supported by AMP for Email in their official docs.

Ultimately it’s up to you how you want your AMPJack App to look!

What Will You Do with AMP for Email?

AMP for Email really is a game changer when it comes to Email technology. I hope this post inspires you: with a little creative use of GET/POST requests, and the power of AMP for Email + Dyspatch, you can create some pretty engaging, fun, and unique experiences for your customers right inside their inbox!

If you do decide to create your own variation of AMPJack—or any cool AMP for Email experiences for the inbox—we’d surely love to see it! Send the Dyspatch team a copy to us@dyspatch.io!

Building AMP Emails Series: Dynamic Lists https://www.dyspatch.io/blog/building-amp-emails-series-dynamic-lists/ Tue, 01 Dec 2020 18:31:59 +0000 https://www.dyspatch.io/?p=4655


Overview

In this article, we’ll look at the basic tools you will need to create and test an AMP email. More specifically, we’ll take a look at how we can use AMP to create an email with a live data feed in it. For example, a “weekly updates” email for our blog subscribers that uses AMP to update itself with new articles that you post even after you send the email.

AMP is pretty cool. It changes the way businesses engage with their audience by allowing a whole new level of interactivity that goes beyond what is possible with traditional email. AMP is currently supported by Gmail and a few other mail clients. Although this covers most users out there (Gmail having the largest user base), not all providers will show an AMP version, which means you still need to create an HTML fallback version. But don’t be discouraged! You send both versions to all of your users. If a user opens the email using a client that supports AMP, they will see the AMP version. If their client doesn’t support AMP, they will see the HTML version. In this example, we will focus on how to make the AMP version, but it’s important to note that an HTML version is still required and should be held to a certain standard to prevent a bad user experience.

Below I describe in detail each step for creating the AMP email version. Follow this guide and you’ll be well on your way to fully functioning AMP emails.

  1. Setup AMP boilerplate
  2. Add an “amp-list” tag
  3. Define our list data source
  4. Define how the list items will look
  5. Add a little style
  6. Send a live test to our inbox

Tools we will use

When developing AMP emails, there are two very useful tools that you can use: AMP Playground and Gmail AMP for Email Playground. AMP Playground is what you’ll want to use first. It’s basically an AMP email editor which will let you know if you make any mistakes. It will also give you a live preview of what your AMP email will look like while you work on it.

Once you’ve finished writing a rough draft using the AMP Playground, you’ll want to test it. That’s when the Gmail AMP for Email Playground comes in handy. It lets you send an AMP email to yourself so you can see what it really looks like in your inbox.

Step 1: Setup AMP boilerplate

Open the AMP Playground in your browser. You should see something like this:

Set up the amp playground boilerplate

The AMP Playground starts you off with the minimal boilerplate you need for any AMP email. Notice the green “VALID” pill near the top of the page? While you are developing your AMP email, pay attention to this. It will turn red if you make any mistakes that make your AMP email code invalid. If this happens, you can click on it to see an explanation of the errors. It’s important to ensure that you have no errors, otherwise your AMP version will not display when you send it.

Step 2: Add an amp-list tag

Alright, let’s start writing an AMP email. The first thing that we want to do is add a “dynamic list”. We can do that using the amp-list tag. Let’s add one. Add the following content to your email underneath the h1 header tag:
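(The exact snippet from the original article isn’t preserved here; this reconstruction uses the required attributes discussed below and the sample data URL from Step 3, so treat the width and height values as assumptions.)

<amp-list src="https://57d9ec0e220e62a1c06916ab6b3b1f71.m.pipedream.net"
          width="600"
          height="400"
          layout="fixed">
</amp-list>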

When you do this, the AMP Playground will add a script to your email content. Notice the new script that has the attribute “custom-element” set to “amp-list”? The AMP Playground added it for you because it saw that you are using an `amp-list` tag.

Why does it do this? Every type of AMP tag has a corresponding script that you need to include. So when you want to use a particular AMP tag, make sure you have the script that adds support for it in the head of the email. The AMP Playground does this for you automatically, but if you edit your email somewhere else you’ll have to remember to do this yourself.

Your AMP email should now look like this

Your email should now look like a giant red block in AMP playground

Oof, that red isn’t great, but we’ll get to that later. For now, let’s dissect what we have so far. Notice the “width”, “height” and “layout” attributes. These are required, and AMP won’t let you use a list without them. In fact, most AMP tags have this requirement. That’s because AMP wants to know how to lay out your email. This is a bit strange if you’re used to “regular” HTML, where you don’t have to set these attributes and your browser will just figure things out.

AMP requires you to specify the sizes of most elements up front. That’s because AMP is designed to be performant and ensure that your email layout is consistent and doesn’t jump around when your data loads or changes. This puts the onus on you to think ahead about what size things should be. The benefit of this is that your email will load faster and will be more usable. You can read more about layout in AMP for Email here.

Step 3: Define our list data source

The next thing to notice about the `amp-list` tag is the `src` attribute. This tells the `amp-list` where the data it needs will come from. This URL can point to either a JSON file that is hosted online or to an API server. Currently it has the value “https://57d9ec0e220e62a1c06916ab6b3b1f71.m.pipedream.net”. This URL points to a mock API that has been set up for this example. About that: if you want to make your own, there are two really important things to know:

1. Secure the URL with CORS
For security reasons, the “src” URL needs to include some HTTP headers called CORS headers. This is really important because it means you can’t use just any API. Most APIs won’t be set up to set the security headers that AMP expects. You can read more about what those headers are and how they work here.

2. Return an “items” list
The `amp-list` component expects the JSON to be formatted in a specific way. It needs to be a JSON object with a top level field called “items” that is a list, like the following:
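(A minimal illustration, not the sample feed’s actual data:)

{
  "items": [
    { "title": "First item" },
    { "title": "Second item" }
  ]
}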

You can put any valid JSON data in this list. It could be just a simple list of text or it could be a list of objects. The important part is naming that first part “items” so that the `amp-list` component can find the list of things you want. You can read more about that here, including how to change the name of “items” if you need to.

Step 4: Define how the list items will look

Okay, so we’ve gone over this new “amp-list” tag, but it’s not doing anything yet. We still have a big red box. We still need to add some more AMP to make things work. Specifically, we need to describe what each list item should look like. It’s kinda like we just wrote a `ul` or `ol` tag in HTML and we need to add some `li` tags for the things that go inside. But it’s a bit trickier for the `amp-list` than it is for an HTML list.

To get started, add the following AMP code into your list (in between the `amp-list` tags):
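(Reconstructed from the walkthrough below; the original article’s exact markup and class names aren’t preserved.)

<template type="amp-mustache">
  <div class="article">
    <a href="{{url}}">{{title}}</a>
    <p>{{summary}}</p>
  </div>
</template>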

After doing that, you should now see something like this:

Setting up amp-list in AMP playground

Cool! The big red box is gone and we see a list of links. What’s going on here? 

First of all, we used a new AMP tag called `template`. You have to put this tag around all the markup for the items in your list. When you do this, AMP will make copies of everything in the `template` tag for each item in your list. But that wouldn’t be very useful if that’s all it did because you would just see the same content repeated over and over again. 

The other important thing to notice is the presence of `{{something}}` scattered around inside the markup. These are variables that get replaced by each item in the list. It’s pretty much the same concept as “merge variables” or “merge tags” which you may already be familiar with from email templating systems in services such as SparkPost or MailChimp. They are like sticky notes saying “replace me with data.” This data comes from the “src” URL that we set up earlier. 

This is what the list data JSON from the “src” URL looks like:
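(Shown here with placeholder entries; the mock API’s real data isn’t reproduced.)

{
  "items": [
    {
      "title": "Building AMP Emails Series: Dynamic Lists",
      "summary": "The basic tools you need to create and test an AMP email.",
      "url": "https://www.dyspatch.io/blog/building-amp-emails-series-dynamic-lists/"
    },
    {
      "title": "AMP for Email: Making a Blackjack Game for the Inbox",
      "summary": "Using GET and POST requests to build a game right in the inbox.",
      "url": "https://www.dyspatch.io/blog/amp-for-email-blackjack-game/"
    }
  ]
}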

Notice the relationship between the data and the markup that we added?

The variables in the markup here are data points like `{{title}}`, `{{summary}}` and `{{url}}` that line up with “title”, “summary” and “url” in the “items” JSON.

Step 5: Add a little style

Right now things look a bit plain. Styling AMP emails is very similar to styling HTML emails. You can write mostly the same CSS that works in a web page, with a few exceptions. It’s actually a little easier to write CSS for an AMP email than it is for a regular HTML email because AMP CSS support isn’t full of quirks from older, legacy email clients. Just like a web page, you can use inline styles and you can put styles in a `style` tag in the `head` tag at the top. There is one slight difference though. You can only add one style tag at the top of your email and it has to have the ‘amp-custom’ attribute on it.

Let’s make this example look a little better by adding some styles. Add the following into the `style amp-custom` tag at the top in the head of the email:
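(The original styles aren’t preserved; this stand-in assumes the .article class from the template sketch above.)

.article {
  padding: 12px 16px;
  border-bottom: 1px solid #e0e0e0;
  font-family: Helvetica, Arial, sans-serif;
}

.article a {
  color: #1a73e8;
  font-weight: bold;
  text-decoration: none;
}

.article p {
  margin: 4px 0 0;
  color: #555555;
}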

You should now have something that looks like this:

Styled amp-list in the AMP playground

Okay, definitely not an example of peak design. But hey, it works!

Step 6: Send a live test to our inbox

Almost done! This is looking good, but the AMP Playground just simulates how AMP works. It’s not perfect and it might appear a little differently from the live version. So before you can call this done, you need to use the Gmail AMP for Email Playground to send yourself a test. Let’s do that now.

Open the Gmail AMP for Email Playground in a new tab/window. It looks a lot like the AMP Playground. Copy all the AMP code you just wrote in the AMP Playground and paste it into the Gmail AMP for Email Playground. It should still say “Validation Status: PASS” at the bottom and you should see the same preview on the right that you saw in the AMP Playground.

Send a live test to your inbox

Next, click the “SEND” button at the bottom and open your Gmail inbox. You should see your preview email. The first time you do this, you might see a message telling you to enable dynamic emails in your Gmail settings. Follow the steps to turn that on in your settings and re-open the email. You should see the AMP content now and it should look like this:

Voila! It's the rendered AMP list!

Voila! Those are the basics of how to create and test a live data feed in an email using the list component of AMP for Email. Next you may want to explore the AMP for Email Documentation to learn some more about other AMP features. Once you’ve got the hang of things, you will need to register for sender distribution before you can start sending AMP to other people.

Hopefully this article makes it a bit easier to start building AMP emails, but it is by no means an easy task. Our email production platform, Dyspatch, helps to make building AMP emails even easier with pre-coded interactive email apps. If you want to give it a try, just sign up for the free trial and you’ll get access to all the features in the product, including our AMP starter theme that comes loaded with pre-coded AMP blocks.

Building a scalable GraphQL server, with lessons from OData https://www.dyspatch.io/blog/building-a-scalable-graphql-server-with-lessons-from-odata/ Thu, 24 Jan 2019 05:00:39 +0000 http://blog.dyspatch.io/?p=2306

GraphQL is awesome for clients.

GraphQL offers a lot of power and flexibility to the API consumer. For example, the following query will fetch a list of users along with their id, email, and group:

{
    users {
        id
        email
        group {
            id
            name
        }
    }
}

The API consumer loves this because:

  • They get to fetch only the fields they actually use.
  • Future versions of the API that add new fields will not change the behaviour of this request.
  • They can choose which associated data to load along with the user (in this case group). There is no need to do a separate API call to fetch the group for each user, which would create an N+1 querying issue.

It’s definitely worth checking out the GraphQL website if you’re still not persuaded. There’s a lot to like.

Scalable server implementations are tricky

Let’s think about how we implement the users with groups example on the server. The standard way to implement GraphQL servers is to describe the API schema and then map each type to a resolver. This example needs a resolveUsers function and a resolveGroupById function. If the data is stored in a relational database, then groups will not be loaded unless they are actually used by the query. This means that the API call is very efficient when groups are not requested, but when groups are included there is one call to resolveGroupById for every user fetched. The SELECT N+1 issue still exists; it’s just been moved from the client to the server. That’s an improvement, but it doesn’t eliminate the problem.

Is SELECT N+1 even a problem?

It depends. In some cases, it might not cause problems, but it’s not best practice, and things can quickly get out of hand as query complexity increases: N+1 can turn into (N * M * L) + 1.

Database perspective

With our naive implementation, the database will see calls like this (all in separate round trips):

SELECT … FROM users LIMIT $page_size
SELECT … FROM groups WHERE id = $group_id
SELECT … FROM groups WHERE id = $group_id
SELECT … FROM groups WHERE id = $group_id

Now let’s say that the requirements of our application demand that we scale beyond these restrictions. We can’t afford to use this naive implementation. First, we’ll look at how this problem is already solved in another API framework.

The OData solution

If this were a normal REST API, there would be an endpoint that looks like this:

GET /api/users_with_groups

and it would generate this SQL:

SELECT … FROM users
LEFT JOIN groups ON groups.id = users.group_id
LIMIT $page_size

This feels natural for someone used to working with relational databases, but it’s a bit difficult to reconcile with the flexible nature of GraphQL. Conceivably, it is possible to inspect the request, find out which tables need to be joined together, and generate an SQL query that selects the exact data required to fulfill that request. In fact, there is another API protocol, named OData, whose primary implementation does precisely this. Typically, the request would look something like:

GET /api/odata/users?$expand=group

Since OData is mostly used in the .NET ecosystem, if the API interface looks like this there’s a pretty good chance that the OData query is being translated into an IQueryable and then to a single SQL SELECT statement, with the necessary JOINs. My initial intuition was that this is the exact, ideal behaviour, but experience tells me otherwise. Of course, OData is just an API protocol. There’s no reason that OData must use dynamic joins but from my experience, every one does because that’s how the Microsoft library works.

Why the OData solution is not effective

The approach of essentially allowing API consumers to build their own SQL queries (with limitations) is riddled with problems:

  • It gets harder to control exactly what the SQL statements are going to look like and avoid malicious queries that kill the server.
  • Usually, page sizes and maximum entities in the $expand clause are the only way to limit query complexity.
  • SQL query plan is less likely to be cached because there are more unique queries.
  • Application layer caching is harder to implement and less effective.
  • Transitioning to a NoSQL database becomes very difficult because the relational database query interface has leaked through the API.

In short: consumers get too much control over the generated SQL.

An alternative, GraphQL-friendly solution

Rather than trying to “fix” our generated SQL, we can go in the opposite direction. Instead of building complex queries with a lot of joins we’ll stick to very simple queries like these:

-- Get one thing by ID
SELECT … FROM things WHERE id = $id

-- Get associated things
SELECT … FROM child_things WHERE parent_id = $thing_id LIMIT $page_size

-- Get a page of things
SELECT … FROM things WHERE created > $min_date ORDER BY created LIMIT $page_size

JOINs are not forbidden; they just need to be static. For example, things could actually be an aggregate that requires joining multiple tables together. The only dynamic parts are the actual variables ($id, $page_size, etc.), and in some cases the contents of the WHERE and ORDER BY clauses, to accommodate things like user-controlled sorting.

The benefit of keeping data fetch operations nice and simple is that there are a lot of optimization opportunities. Even though there are still N+1 calls to resolver functions, the database doesn’t need to be hit nearly as frequently.

Optimizations

Memoization

In Node.js this can be accomplished with the library dataloader. Memoization is a type of caching where a function’s return value is cached for a given set of inputs. This means that if there are many calls to resolveGroupById, it will only execute once per unique group ID and then reuse the value. The cached value only needs to live for the duration of the request (or one event loop iteration in JS) in order to be useful. In some use cases, this doesn’t help at all but in others, it massively reduces the number of database calls. Imagine loading 100 users with their group, but the only groups are “admins” and “users”. There would be one query to load the users and a maximum of two queries to load the groups because there is a lot of repetition.

Batching

This can also be accomplished with dataloader. In an N+1 database query situation, the N queries may be batchable. Instead of calling resolveGroupById once per group, call resolveManyGroupsById once with the list of unique IDs. The result could be many SELECT statements in a single round trip. It could also generate a single SELECT:

SELECT … FROM groups WHERE id IN ($id_1, $id_2, …)
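As a sketch of how memoization and batching come together with dataloader (the db.query helper, table names, and resolver shape here are assumptions, not code from this post):

// Sketch only: assumes a db.query(sql, params) helper that returns rows.
const DataLoader = require('dataloader');

// Create one loader per request so the memoization cache lives only as long
// as the request itself.
function makeGroupLoader(db) {
  return new DataLoader(async (ids) => {
    // All group IDs requested in one tick are batched into a single SELECT.
    const rows = await db.query(
      'SELECT id, name FROM groups WHERE id = ANY($1)',
      [ids]
    );
    const byId = new Map(rows.map((row) => [row.id, row]));
    // dataloader requires results in the same order as the input keys.
    return ids.map((id) => byId.get(id) || null);
  });
}

// In the resolver, every user's group goes through the loader: duplicate IDs
// are memoized, unique IDs are batched.
const resolvers = {
  User: {
    group: (user, args, context) => context.groupLoader.load(user.groupId),
  },
};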

Database caching

A simpler database interface, where data is only fetched by ID or listed with a simple filter, means fewer unique query plans. That also means query plans are more likely to be cached and less RAM is required to cache actual data in memory. Not all databases work the same, but in general, in-database caching features work better with fewer unique queries.

Application layer caching

This refers to a cache layer that sits somewhere in between the web API and the actual database.  It probably involves storing data in in-process memory, or in an external cache database like Redis or Memcached. With predictable queries, it is easier to define aggregates that can be fetched only by ID. Since there are likely a ton of GetThingById calls, all that’s needed is a key/value store, as long as there is a reasonable way to invalidate the cache when needed. Ideally, all writes are encapsulated by the API so that InvalidateThingById can be called whenever something changes.

Wrapping up

It turns out that the ostensibly naive approach of calling many resolvers for a single request actually scales pretty well, as long as you are willing to invest some effort into the data access layer that sits between the resolver functions and the database itself. The alternative solution of generating complex, dynamic SQL actually ends up putting complexity in the wrong place and creates more problems for an application that needs to scale well. The best part about this conclusion is that it makes total sense to start out with the simplest possible implementation and then add in the optimizations only if, or when, they are needed, without undoing any previous work.

Sendwithus and Asana equals Swusana https://www.dyspatch.io/blog/sendwithus-asana-swusana/ Fri, 14 Sep 2018 05:00:08 +0000 http://blog.dyspatch.io/?p=2019

At Dyspatch, we love Asana and use it to track all tasks and projects across the company. That said, there are a couple of areas that we thought could use some polish. This blog post talks about a tool we created to help make Asana more effective for us, called Swusana.

Swusana is an open-source script that adds two buttons to the top navigation bar of Asana, allowing you to toggle the following functionality:

  • Noise reduction: Hide noise in task comments and non-coloured tags in list views
  • Automatic no-follow: A blackout button that prevents you from being added as a follower to any tasks you view/modify while it is activated

Noise Reduction

A great thing about Asana is that it audits every action taken on a ticket so the full history of that ticket is logged; who made a change, when they made it, and what they changed. While this is a useful feature, most of the time you really only care about the human-generated comments on a particular ticket. In worst-case examples, there can be a whole page of actions to get through before you can see the next human-entered comment. Observe the following real-life ticket, in which a story has gone through our agile process in Asana.

Before Swusana:

After Swusana:

As you can see in the above image, Swusana instantly removed the noise in the notifications section, highlighting the comments that were truly important.  

To turn this feature on, simply click the little bullhorn button in the top bar in Asana:

Turn on quiet mode

Automatic No-Follow

Another great feature of Asana is the Inbox. This is where you’re notified when any tickets that you’ve expressed an interest in have been modified. It’s a great, central place to access all of the information that you need to see. However, the way Asana determines your level of interest in a ticket is a little excessive. If you merely comment on a ticket, it will appear in your inbox every time it’s updated. This can become overwhelming, especially if you’re updating tickets during a meeting since you’ll get a notification about every change to every ticket you touch. Forever.

Overwhelming Inbox

The solution is to enter blackout mode before the meeting – or before updating any ticket you’d rather not receive future notifications about. Blackout mode ensures you will not be made a ‘follower’ on any ticket you touch, no matter what you do to it (not even if you click the ‘follow’ button). To turn blackout mode on, click the ‘person-no-entry’ icon, then click ‘OK’ on the warning dialog box.

Turn on blackout mode

You will no longer be made a ‘follower’ on any Asana task you view, comment on, move, or modify in any way. Since this solution can itself be a bit excessive, the button flashes red to remind you whenever blackout mode is on. Simply click the button again to turn off blackout mode.

Blackout mode flashing warning

If you’re interested in using Swusana, installation instructions for Chrome can be found on  GitHub.

QA Testing a Product with Cypress https://www.dyspatch.io/blog/qa-testing-a-product-with-cypress/ Tue, 03 Jul 2018 08:00:05 +0000 http://blog.dyspatch.io/?p=1889

Do you feel confident your team is shipping code that won’t break production?

Building a product is tough: ensuring the promised value of your product matches the value that is delivered can be difficult. Deploying possibly untested code to your application has the potential to break features in unanticipated, and often unseen, ways. Fortunately, your teams can combat this cycle by implementing different forms of manual and automated testing.

At Dyspatch, we use a testing tool called Cypress to validate the quality of our software code, ensuring the product experience is maintained while we continuously ship and deploy code to production daily.

The Value of Code and UI Testing

Testing a product and putting it through its paces is incredibly valuable to both the team and the organization. It provides reassurance that the developers you trust to build the software aren’t delivering breaking changes in ways that can impact your customers.

Testing your code:

  • Saves time in the long run, for you, your team, and the company
  • Is industry best-practice — the majority of professional developers test their code
  • Reduces the potential side effects a new feature can have on the rest of your codebase
  • Mitigates regressions between deployments of your software and products
  • Is, most of all, fun!

As a developer, testing the code you write is strongly encouraged, and making sure you don’t break said code when you deliver updates in the future is paramount. Different forms of testing add various levels of both cost and value to a company, and it’s always wise to choose the level that suits your team and the needs of the product. Unit testing is the most basic form and covers the very fine-grained bits of your codebase. Integration tests combine pieces of your application and test them in slightly larger chunks. End-to-end tests are added as a final step to cap off the entire test suite and add value by testing the application from front to back, testing the whole picture.

The Testing Pyramid

Manual user interface (UI) testing can be arduous and cumbersome, however, and very expensive in terms of financial cost and people hours. For these reasons, many teams don’t see the need to have many end-to-end tests, if any at all, in their development cycle. Historically, UI testing has been difficult to automate consistently and accurately and is very slow in comparison to other forms of testing, such as unit and integration tests. But in the last 10 years or so, tools have emerged from the open source community to help alleviate this pain, and the rise of web application languages has made it even easier to get up and running with reliable, automated end-to-end (E2E) tests.

There are many tools out there, but none with the recognition in the testing community that Selenium has. Selenium is the de facto tool for automated UI testing; it has been around for over 10 years and has an impressively large community surrounding it. You can automate different UI tests, such as form submissions and ensuring interactive elements on a web page end up in the correct state. The problem with automation tools like this, however, is that they don’t have any knowledge of the state of your application while running in a web browser. That said, there are hacks to get this functionality, at least partially, but they’re not very user-friendly.

The Shining Beacon Of Hope

Selenium is beginning to show its age in 2018, and anyone in the web development community who’s used it for a period of time will tell you it’s hard to learn to love it. A good reason it’s falling out of favour with developers new to testing web applications is that it isn’t as web-friendly as more modern testing utilities. There are newer and more directly-scoped unit and integration testing tools that have improved in the areas where Javascript developers tend to spend their time. This is exactly where Cypress starts to fill the gap for E2E testing; it’s a modern tool, written for web developers by web developers.

There are a few key areas that differentiate Cypress from Selenium, one of which is the fact that Cypress is built to be integrated into the development cycle while Selenium is a standalone testing tool. Another is that Cypress tests are written in web-native languages (anything that can transpile to Javascript) and the test runner executes your tests directly inside the browser, not from the outside using a remote protocol. This also gives your tests the ability to have automatic DOM-element retries while your tests run and there is no need to add explicit waiting or timeouts for elements to be visible in the browser window.

Because the actual test code is being executed inside of a browser, there is no object serialization and no remote protocol to generate test flakiness — you have access to everything you would find in a web application environment. What this means for web application developers on your team is that they can participate in writing the Cypress tests in a familiar language, just like they would when writing unit and integration tests for their own code.

QA and UI testing at Dyspatch

At Dyspatch there is a dedicated QA Developer who maintains and iterates on the suite of Cypress UI tests and the infrastructure surrounding it. This leaves room for the developers actually building the application code to care about their own unit and integration tests, but also to write ‘happy path’ UI tests into the suite as well. Because Cypress UI tests live alongside the code they’re testing, it becomes very straightforward for developers to maintain those tests while also feeling confident updating features in the code.

Below, we’ll walk through an example of a very simple Cypress test, with a setup and a few assertions. It’s that simple!

If you have a Node.js environment in your project, with npm available, you can install Cypress as a dev dependency by going to your terminal and typing npm install -D cypress. After installation, the Cypress test runner can be opened with ./node_modules/.bin/cypress open, again from your terminal.

Creating your first test spec is just as simple as the two steps above. Go to your terminal again and type touch cypress/integration/first_test_spec.js, then navigate to the cypress/integration folder in your file explorer or terminal. Open the first_test_spec.js file you just created in your text editor of choice.

With this newly created test spec file, you can run the simple test below as-is:
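The spec below is a close approximation of that test, following the walkthrough underneath (the demo URL is Cypress’s kitchen-sink example site, assumed here rather than taken from the original post):

describe('My First Test', () => {
  it('clicks a link, types into an input, and checks the value', () => {
    // Setup: visit the demo site
    cy.visit('https://example.cypress.io')

    // Find an element containing the word 'type' (likely a button) and click it
    cy.contains('type').click()

    // Assert that the new URL includes a predetermined segment
    cy.url().should('include', '/commands/actions')

    // Get the input with the class 'action-email', type a value into it,
    // and assert the value has been bound to the input's value attribute
    cy.get('.action-email')
      .type('fake@email.com')
      .should('have.value', 'fake@email.com')
  })
})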

The test reads:

  • Describe the behaviour of the test
  • Describe what the test should be doing
  • The test content:
    • Visits a URL and searches for an element on the page containing the word ‘type,’ then clicks on it (this is likely a button)
    • Asserts that the URL of the current page includes a predetermined URL segment
    • Gets the element on the page with the class name ‘action-email’ (likely an input), types the value ‘fake@email.com’ into the input field
    • Asserts that the typed value has been bound to the input value attribute

Tests such as this are easy to write and reason about, and even though it may seem too simple compared to some unit and integration tests, that’s the point; E2E tests should never test too much and should have a fairly light footprint relative to your overall testing stack.

As of this writing, Dyspatch, our Enterprise email template creation product, has over 70 happy-path and UI tests automated and running against a production-like environment. The tests notify the development team and set off fire alarms (not literally) when they fail. This lets the rest of our organization know that we care about the quality of the application experience and that they can rest assured customers won’t be affected when we deploy new features. These tests execute within a heavily monitored, several-minute boundary, and when the average time starts to creep up, we investigate ways to improve the run-time, so as not to block application developers from deploying.

Adding Cypress to our continuous integration checks on code check-ins has not only become an invaluable piece of tooling in our development team’s quiver, but it has also prevented production-breaking bugs from bringing down our application. Because of this, we’re investing more time into finding ways we can introduce Cypress to other levels of testing in our application stack.

Shifting from Monolith to Microservice https://www.dyspatch.io/blog/shifting-from-monolith-to-microservice/ Wed, 23 May 2018 06:00:21 +0000 http://blog.dyspatch.io/?p=1803

I am proud of the codebase we have at Dyspatch today, and of the road we took to get here. As we rewrite components as microservices, I’ve had time to reflect on that road. My goal with this post is to document our path, thought processes, and goals, in the hope it may help others.

Our code started out as a monolithic Django app, running on Heroku with a single Postgres database backing it. This helped get the product to market quickly. Once it was out there, we iterated, adding features and evolving the code base. The dataset grew, as did the features, but we tried to keep them maintainable by building components to be reentrant, horizontally scalable, and as modular as possible without introducing excess complexity. Reentrant code is the first step toward autoscaling resources – if the code can run in parallel with itself, it can be scaled horizontally. We did this via the dyno API for Heroku, with the work coming in via SQS queues. Backlog in the queue? Add more machines. Queue empty? Reduce machines. Don’t thrash. Done.

This supported us well for several years. As these features grew in numbers, it started to become unwieldy. We started to have concerns with client load time, performance issues with Python, and requirements for specialized infrastructure for specific components. The design pattern we were using wouldn’t support us forever, so we needed to shift. The decision was made to shift to microservices over an extended time period, while building new features. We would build new features/components as microservices when and where it made sense, and when refactoring, we would take existing features/components out of the monolith, and replace them with specialized microservices.

Addressing the client load time, we switched from server-side rendering to a single page app. This allowed us to isolate the css/html/js resources and put them behind a CDN. An advantage to this was the isolation of a concern for the frontend team – a single repo to represent the UI, that could be deployed separately.

The next issue to tackle was a new feature, incoming webhooks. This would receive a lot of traffic. 95% of this traffic is not useful and would need to be thrown away, but that last 5% adds a lot of value if we can show analytics. Writing this in Python wouldn’t make sense, since we’d need a lot more machines to process that much data. Golang seemed like a sensible language for the job, since it is very good for concurrency. The code was simple, it filtered the data, and pushed the desired 5% of data into an SQS queue, for the monolith to consume and process at its leisure. This was our first true microservice. It had a simple interface to provide the limited amount of data needed for the service to know how to filter requests coming in, and would pass the desired data through.

Over the next year, we had two specialized datastore requirements: the first needed high sequential write throughput, the second mainly random-access updates. Both encompassed their own isolated types of data, so they were built out as microservices that owned the storage, processing, and monitoring of their data. With the high number of calls to each of these services, we needed a sensible communication layer, preferably one that could clearly define the contract between the services. We chose gRPC, defined by protobuf files. Each client service pulls a protofile and can build the latest client. The protofile lives in the service, which guarantees backwards compatibility with any version currently in use. gRPC provides some extra speed in comparison to our RESTful alternative, since the SSL handshakes don’t happen on every call, and a bidirectional stream is opened up, where each message is serialized to a binary format before being sent across the wire.

At this point, we are building major new features in their own services, and we’ve gone on to replace several large components of our code base, isolating the concerns for that portion of data and processing to its own microservice. But we still have more to do. The point where we have removed all data and processing layers, and only the business logic remains in the view functions, is on the horizon. I believe the next steps will involve a slow shift into a GraphQL service, migrating each frontend component, one at a time, to depend on the GraphQL data. Then, once all systems have shifted over, the dream will be fulfilled: finally retiring the monolith. This is, I believe, an attainable goal.

How to improve your API without causing issues for your customers https://www.dyspatch.io/blog/how-to-improve-your-api/ Tue, 15 May 2018 06:00:51 +0000 http://blog.dyspatch.io/?p=1823

Have you ever written software that was perfect after release? Did you ship it to production and never push an update again? I haven’t. And APIs are no exception.

After you publish your API, you’ll probably receive a flood of requests to better support different use cases. Customers will ask for more fields in your responses, extra query parameters to filter results, methods that provide all the data they need in one request, etc. You’ll end up with two camps of users: those who don’t want your API to change and those who do.

Catch 22? Not necessarily. What if you could add as many new features to your API as you want without changing anything for existing users? It is possible. You can have your cake and eat it, too!

There are some great examples of APIs with powerful versioning mechanisms and one I find particularly impressive is Stripe’s. In fact, even though we planned to do it anyway, a Stripe engineering blog post inspired both our API versioning project and this post. I definitely recommend it for the great job it does explaining both the reasoning behind including versioning in your API and the value it brings.

I often see questions in places like StackOverflow along the lines of, “How do I build an API versioning system like Stripe’s?” Well, I did just that. I built an API versioning system like Stripe’s, in a Go backend, and then saw a post on StackOverflow where someone specifically asked how to do that.

I think it’s probably a little harder in Go, or other similar languages, because the examples given by Stripe use Ruby and rely on language features unavailable in Go. So this leaves the question: how to actually implement versioning like theirs using a language other than Ruby, and more specifically, using Go?

I have a Java background and I’m relatively new to Go, so I’ll admit right up front that I may have come at some of this a bit sideways. But regardless of how I got there, it works. And my Go-loving co-workers approve of the result.

When I started, I had to ask myself a few questions:

What will require a new version and how frequently will my API need to change?

  • Will I need a new version for all changes or only for backwards-incompatible changes?
  • Multiple releases per day?

What pieces do I want versioned?

  • Headers?
  • Response body?
  • Request parameters?
  • Request body?

In my mind, only the response body and possibly the headers need to be versioned. Your server will be wonderful and gracefully handle different request versions that use the same method. And of course, you have automated tests that regularly validate how each version is handled, right? So, that just leaves response data to be versioned. This is where your users may have code that is closely tied to the exact data in your response. Their code may not be able to handle a new or deleted data field or any number of other possible changes you might make.

A good versioning system should be able to shield users from pretty much any change. At Dyspatch, we decided that we wanted to version every piece of the response data. Any changes to the response body would be done in a new version with no impact on previous versions. We have a lot of automated testing in place for validation, which frees us up to change things regularly, to make incremental improvements and adapt to customer needs.

So how do we build an API with ‘versioning like Stripe’s’?

Well, there are a lot of ways it could be done but some depend heavily on cool language features that aren’t always available. What follows is how I did it in Go but it would probably work in a lot of other languages as well. If you work on a web API that serializes data into a format like JSON, you should be able to replicate what I did.

The basic idea is simple: encode everything you want versioned as a JSON object and then mutate that object using a series of migration functions until it matches the version your user wants.

Why use this approach? In Go, I don’t really have objects that can be mutated like they can be in Ruby. I have structs and structs don’t like having data types change or fields added/removed on the fly. I could use a map, but again, I run into issues if I have a version of the same field that needs to be a different data type. I could use empty interfaces and pointers but that’s a pretty low-level solution that’s just begging for bugs. So instead I use the JSON library I already use in the API to write JSON data. I can add and remove fields, as well as change field types. Perfect for mutating an object incrementally, which is exactly what I need to mutate my data from one version to the next!
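
To make that concrete, here's a minimal sketch using nothing but the standard library's encoding/json package (the field names are invented for illustration; the real implementation uses a dedicated JSON builder library, covered below): decode the newest response into a generic map, mutate it, and re-encode it.

// Sketch only: mutate a JSON response from the newest shape back to an older one.
// The field names here are invented for illustration.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Response body for the current (newest) API version
	current := []byte(`{"id":"tpl_123","name":"Welcome","locales":["en","fr"]}`)

	var doc map[string]interface{}
	if err := json.Unmarshal(current, &doc); err != nil {
		panic(err)
	}

	// "Migrate" back one version: drop a field that didn't exist yet and
	// change another field from a list to a single string.
	delete(doc, "locales")
	doc["locale"] = "en"

	older, err := json.Marshal(doc)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(older)) // {"id":"tpl_123","locale":"en","name":"Welcome"}
}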

How do you get the version from the user?

The first thing we need to do is find what version of the API is being called (we’ll call this the “target” version). There are a ton of ways this can be done, but probably one of the most idiomatic HTTP approaches is to use the `Accept` header. Be warned, though, it can be a bit tricky to parse the Accept header in a way that actually follows the RFC standard. I really banged my head against the wall trying to write a regex that could accommodate all the content-negotiation syntax that the header can include. If you don’t want to do this or don’t have a library available that does this for you, you should probably just use a custom header like `X-MyCoolApp-Version`. There are other approaches too, like using a request parameter or a path variable to send the version. Generally, it’s best to use a header unless you have a really good reason not to. Also consider what your response will be if no version is passed. You may want to return an error or default to the latest version.

Whatever approach you end up choosing, you will want some middleware that intercepts every API request and parses out the version value. You’ll want to validate this version and return an error in the event of an unrecognized version.

In Go, this means I have a version detection function that wraps an HTTP handler function. This is the version detection middleware. It looks for the version header, grabs the value and puts it in the request context. Now the downstream handlers are able to look into this context to check what version is requested later on.

// Version detection middleware that wraps an httprouter handler.
// (The enclosing function signature shown here is illustrative.)
func versionMiddleware(h httprouter.Handle) httprouter.Handle {
	return func(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
		// Check requested API version
		version, err := parseAcceptVersion(r.Header.Get(AcceptHeaderName), Versions)

		// Ensure a valid version was requested
		if err != nil {
			badParam(r.Context(), "version", "No valid version found in 'Accept' header.", w)
			return
		}

		// Set the detected version in our request context so we can use it later
		r = r.WithContext(context.WithValue(r.Context(), config.APIVersion, version))

		// Invoke the next handler in the chain now that we have a validated version in the request context
		h(w, r, ps)
	}
}
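
The parseAcceptVersion helper isn't shown above, so here's a deliberately simplified sketch of what such a function might look like. It assumes the version is sent as a version=... parameter on the Accept header (e.g. Accept: application/vnd.mycoolapp+json; version=2018.02.10) and that the set of known versions is just a map of strings; it ignores multiple media ranges, q-values, and all the other content-negotiation syntax that makes real Accept parsing painful.

package main

import (
	"errors"
	"fmt"
	"strings"
)

// parseAcceptVersion (simplified sketch): pull a "version" parameter out of an Accept
// header value and check it against the set of known versions. Real Accept headers can
// contain multiple media ranges and q-values, which this sketch deliberately ignores.
func parseAcceptVersion(accept string, known map[string]bool) (string, error) {
	for _, part := range strings.Split(accept, ";") {
		part = strings.TrimSpace(part)
		if strings.HasPrefix(part, "version=") {
			v := strings.TrimPrefix(part, "version=")
			if known[v] {
				return v, nil
			}
			return "", fmt.Errorf("unknown version %q", v)
		}
	}
	return "", errors.New("no version parameter found in Accept header")
}

func main() {
	known := map[string]bool{"2018.02.09": true, "2018.02.10": true}
	v, err := parseAcceptVersion("application/vnd.mycoolapp+json; version=2018.02.10", known)
	fmt.Println(v, err) // 2018.02.10 <nil>
}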

Once you've detected the version, you can start making decisions based on it. There may be some useful things to do at this point, but generally you won't need the version until you output your response. This is easy if you visualize your request as a pipeline with distinct stages: parsing, processing, and marshalling. The parsing stage is where we get the version. The processing stage doesn't matter to the versioning system. Think of it in abstract terms, as a black box that takes in request parameters and spits out internal data. This is really key: thinking of processing as an abstract black box keeps the business logic simple, something that can be changed without having to understand versioning at the same time. Trying to deal with both simultaneously is a sure route into spaghetti-code hell.
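
If it helps to see that pipeline as code, here's a compressed, illustrative handler with stand-in types and helpers (none of these names come from our real codebase). The point is simply that the processing stage knows nothing about versions; only the final marshalling call does.

package main

import (
	"context"
	"net/http"
)

type exampleData struct{ Name string } // stand-in for internal data

// Stage 1 (parsing): pull what we need out of the request
func parseExampleID(r *http.Request) string {
	return r.URL.Query().Get("id")
}

// Stage 2 (processing): a black box that knows nothing about API versions
func loadExample(ctx context.Context, id string) (*exampleData, error) {
	return &exampleData{Name: "example-" + id}, nil
}

// Stage 3 (marshalling): render the internal data at the requested version.
// In the real system this is where the DTO and migration logic (shown below) lives.
func renderExample(ctx context.Context, data *exampleData) ([]byte, error) {
	return []byte(`{"name":"` + data.Name + `"}`), nil
}

func exampleHandler(w http.ResponseWriter, r *http.Request) {
	id := parseExampleID(r)

	data, err := loadExample(r.Context(), id)
	if err != nil {
		http.Error(w, "lookup failed", http.StatusInternalServerError)
		return
	}

	body, err := renderExample(r.Context(), data)
	if err != nil {
		http.Error(w, "marshalling failed", http.StatusInternalServerError)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	w.Write(body)
}

func main() {
	http.HandleFunc("/example", exampleHandler)
	_ = http.ListenAndServe(":8080", nil)
}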

The real versioning magic happens in the final, marshalling stage. This is where we render our internal data to JSON. We start by rendering the current version of the API. This is done in Go using structs called data transfer objects (DTOs), which have the same fields and types as the current (AKA newest) version of the API. We use an open source JSON library* to construct a JSON builder instance seeded with the data from the DTO. If this is the version your user wanted, you're done and can just write the contents of the builder to the response body.

*Full disclosure: I maintain this JSON library as a side project

Great, now I have JSON of one version. What about the older versions?

Now we have a mutable JSON object that represents the response of the current version. But how do we get an older version? The short answer is this: for every API method, we will write functions that transform one version of the response into the version that came before it.

Chaining incremental migration functions together like this is useful because we can build a chain of responsibility that ends at the most recent version. Each time you build a new version, you simply write a function that takes your new version back to the previous one, which already has a function that links it back to the one that came before it, and so on through all previous versions. Having a chain of responsibility like this reduces the cognitive load on developers. They can focus on implementing one migration without having to worry about the rest. It doesn’t get more complicated as more versions are added.

So to actually realize this, I built a migration function that takes:

  • The current JSON builder instance (we mutate this during each migration)
  • The target version (we need to know what version we reach)
  • A map of versions and migration functions
  • The original (internal) data (we need to have the original data at hand in case we need to add in a new field or recover some other original data)

func ExampleAPIMethod(ctx context.Context, data *InternalData) ([]byte, error) {
	// Build the DTO for the current (newest) API version from our internal data.
	// (NewExampleDTO is a hypothetical constructor; use whatever mapping you have.)
	dto := NewExampleDTO(data)

	// Convert the DTO into a mutable JSON builder
	j := jsonbuilder.FromMarshaller(dto, util.SerializeJSON)

	// Build the version changes that apply to this method
	changes := map[string]util.Downgrade{

		// migration closure
		util.Versions["2018.02.09"]: func() error {
			// Remove a field that didn't exist in this version from some paginated results.
			// (size is assumed to be the number of entries in the paginated results.)
			d := j.Enter("data")
			for i := 0; i < size; i++ {
				d.Enter(i).Delete("newField")
			}
			return nil // no error
		},

		// migration closure
		util.Versions["2018.02.10"]: func() error {
			// ... changes for this version ...
			return nil
		},
		// etc...
	}

	// Run the migrations
	target := util.GetVersion(ctx)
	err := util.DowngradeDTO(target, changes)
	if err != nil {
		return nil, errors.Wrap(err, "Unable to migrate example")
	}

	// Voila! Out comes JSON for your response at the version requested!
	return j.MarshalBytes(), nil // set this as your response body
}

Migrator code

// Downgrade functions will be closures over the data being migrated, so no args needed.
// Return an error in the event that the downgrade was unsuccessful. Do not continue.
type Downgrade func() error

// DowngradeDTO downgrades some DTO JSON to an earlier version
func DowngradeDTO(target string, versionChanges map[string]Downgrade) error {
	versions := make([]string, 0, len(versionChanges))
	migrationExists := false
	for k := range versionChanges {
		versions = append(versions, k)
		if target <= k {
			migrationExists = true
		}
	}

	// Ensure the version we are migrating to exists
	if !migrationExists {
		return errors.New("Version does not exist")
	}

	// Sort the versions by date so that we apply the migrations in the right order
	sort.Sort(sort.Reverse(sort.StringSlice(versions))) // lexicographical sorting happens to sort by date

	// Iterate over the version changes, newest first, applying each one
	// until we reach the version we are targeting
	for _, version := range versions {
		// Once we pass the target, we are done
		if version < target {
			return nil
		}

		// Apply the downgrade (it will mutate the data we are processing)
		change := versionChanges[version]
		err := change()
		if err != nil {
			return errors.Wrap(err, "Encountered error while downgrading a DTO object.")
		}
	}

	return nil
}

This is where your choice of versioning string really matters. You should choose a version scheme whose strings sort in the same order as their release dates. Why not just use the release date itself? A string like "2018.04.03", for example, works well. If you choose not to use release dates, make very sure the string format you choose sorts in the same order the versions were released. You may also want to consider accepting version qualifiers like "alpha" and "beta" at the end of your strings, or you could write a custom sorting algorithm.
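
Here's a quick illustration of why the sort order matters, using made-up version strings:

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Zero-padded, date-based version strings sort lexicographically in release order,
	// which is exactly what the migrator above relies on.
	versions := []string{"2017.09", "2018.04.03", "2018.01", "2017.12"}
	sort.Sort(sort.Reverse(sort.StringSlice(versions)))
	fmt.Println(versions) // [2018.04.03 2018.01 2017.12 2017.09]

	// Without zero padding, lexicographic order no longer matches release order:
	// "2018.9" compares greater than "2018.10" even though it was released first.
	fmt.Println("2018.9" > "2018.10") // true
}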

The migration function starts by sorting the migration functions by the versions they are keyed to. This puts them in time order so we can migrate backward through time. The migrator then loops through each version, checking it against the target version as it goes. Once we encounter a version older than the one we want, we know no more migrations are required.

One really nice side effect of using this approach is that it covers “gap” versions. A gap version is a valid version of the API that had no change for a particular API method, leaving a gap in the list of migration functions. The algorithm illustrated above will migrate a chain of functions ordered by date, handling gaps by simply skipping them or stopping before them.

Available migrations for method /example

"2018.03",
// gap
"2018.01",
"2017.12",
"2017.11",
// gap
"2017.09",

Request version "2018.02" of /example

"2018.03" ← downgrade
"2018.02" ← skipped (gap)
"2018.01" ← downgrade
DONE

In this example, version “2018.02” is requested, but the requested method “/example” didn’t have any changes, so that version doesn’t appear in the list of available migrations. The migrator simply skips over the missing version to the closest older version. In the example above, this meant version “2018.01” was the final target. This is a little counter-intuitive until you realize that the newer version “2018.02” didn’t have any changes, so it should have the same response as the “2018.01” version. If we didn’t continue to the next older version, this would leave the response at the “2018.03” version, which is different.

Finally, I output the result data in the response. If you only versioned the body then you can just serialize the JSON builder’s contents to bytes for your response body. If you also included headers in your versioning scheme, then you probably want to encode the entire response like this:

{
    "headers": {
      "X-MyHeader-1": "some value",
      "X-MyHeader-2": "another value"
    },
    "body": {
      "dtoField1": "value",
      "dtoField2": "value"
    }
}

You could then render just the “body” portion of the JSON builder to bytes in your response body and loop over each header in the “headers” portion of the builder, setting the corresponding headers in your response to those values. The migration functions can manipulate these headers, adding, removing and changing their values just like they can for the response body contents.
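
Here's a rough sketch of that final step, using only the standard library and assuming the migrated result has already been serialized to bytes (the envelope field names match the example above; everything else is illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// writeVersionedResponse (sketch): split a migrated {"headers": ..., "body": ...}
// envelope into real HTTP headers and a JSON response body.
func writeVersionedResponse(w http.ResponseWriter, migrated []byte) error {
	var envelope struct {
		Headers map[string]string      `json:"headers"`
		Body    map[string]interface{} `json:"body"`
	}
	if err := json.Unmarshal(migrated, &envelope); err != nil {
		return err
	}

	// Copy the versioned headers onto the real response...
	for name, value := range envelope.Headers {
		w.Header().Set(name, value)
	}
	w.Header().Set("Content-Type", "application/json")

	// ...and serialize only the "body" portion as the response body.
	return json.NewEncoder(w).Encode(envelope.Body)
}

func main() {
	rec := httptest.NewRecorder()
	migrated := []byte(`{"headers":{"X-MyHeader-1":"some value"},"body":{"dtoField1":"value"}}`)
	if err := writeVersionedResponse(rec, migrated); err != nil {
		panic(err)
	}
	fmt.Println(rec.Header().Get("X-MyHeader-1")) // some value
	fmt.Println(rec.Body.String())                // {"dtoField1":"value"}
}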

The magic sauce for building our API versioning system in Go is the JSON builder library. It allows me to easily mutate data, including the types and structure of the data, to incrementally take the current version back to the one that came before, and that version to the one before it, until the user’s version is reached. This way, we can update our API as often as we want, with zero impact on users of older versions. A win-win for both us and our customers.

The post How to improve your API without causing issues for your customers appeared first on Dyspatch.

]]>
Dyspatch and Asana Making the Most of the Right Tool https://www.dyspatch.io/blog/dyspatch-sendwithus-and-asana/ Tue, 20 Feb 2018 09:33:36 +0000 http://blog.dyspatch.io/?p=1678

The post Dyspatch and Asana Making the Most of the Right Tool appeared first on Dyspatch.

]]>
At Dyspatch, we use Asana to track all tasks and projects within the company, from Sales to Marketing to Operations, including Product Development (but that’s a separate blog post). Asana allows us to funnel tasks to the right people, making sure no task is ever lost while ensuring cross-team collaboration and approval.

General Company Structure

At the time of this writing, I’ve been with the company for just a month, but the stark departure from management systems I’ve experienced in the past has been a refreshing revelation.

Dyspatch has a fairly typical structure for a startup: Marketing, Engineering, Sales, etc. The company employs roughly thirty people, with offices in Victoria, BC, and San Francisco, CA.

Sendwithus in Victoria and San Francisco.

This is how the team is split between Victoria and San Francisco, with one outlier in Chicago.

Because of this geographical split, we have rallied around Slack for chat, Zoom for conferencing, and, the subject of this post, Asana for project management and task tracking.  

There are a couple of unique things worth noting about Dyspatch that allow us to make the most of Asana:

  1. Every workday, Dyspatch has an all-hands meeting we call Stand-Up. Let me say that again: The entire company. Every. Single. Day. This may sound excessive, and past experience has shown me that getting executives to agree to a monthly company meeting can be difficult. But the meetings here are nearly always under ten minutes and they always start on time.  
  2. Everyone uses Asana, in every department, at every level, for everything we need to track. This allows us to work a bit of magic.

Dyspatch Hierarchical Stand-Up

In addition to the Company Stand-Up board in Asana (see Fig. 1, below), each department also has an individual Stand-Up. Items actionable by a specific department are discussed at the company level, then moved into the Stand-Up project for that department. If two departments need to take action on an item, it’s put into both Stand-Ups.  (“A task in two projects without cloning? Witchcraft!” – a Jira user with Stockholm syndrome.)

Figure 1: Screenshot of the Sendwithus Company Stand-Up Asana board.

Each of the Stand-Up boards mirrors the Company Board, which includes the following subsections:

  • Done – Task complete.
  • Testing / Review – A task that remains visible to the entire company, is mostly complete, but needs some kind of sign-off before being marked as ‘Done’.
  • Doing – A task that’s in-progress and that needs to remain visible.
  • On Hold – Something we want to postpone but that’s still critical enough to be seen every day.
  • Needs Discussion – This section is for new items that need to be presented to the entire company.  

Adding something to the Needs Discussion subsection guarantees it will be brought to the attention of the entire company and addressed the next business day. The person who submits the item is asked to talk about it during Stand-Up. It’s then assigned to a department, if necessary, and either kept at the company level for tracking or not.

The Engineering Stand-Up board (Fig. 2, below), is almost identical to the Company Stand-Up board, but with an additional section called ‘Tech Talk’, a place for recent learnings – cool stuff that the rest of the team would benefit from. Things that are brought up in the Company Stand-Up that the Engineering team is responsible for will move into the Engineering ‘Needs Discussion’ section and be processed in a similar way.

Figure 2: The Engineering Team Stand-Up Asana board.

Hierarchical Stand-Up Example

Here’s a great example of the power of this process (see Fig. 3, below):

  1. Sales received an outraged email from a customer stating that we were sending spam emails to their contact list. After confirming the spam didn’t come from us, our Sales team needed help diagnosing the problem.  
  2. Sales took the issue to their manager, who did some research on SPF and DKIM. The manager was unable to definitively identify the problem so they did the Dyspatch thing and…
  3. Put an item in the ‘Needs Discussion’ subsection for Stand-Up the following day.
  4. At the next Stand-Up, the item was briefly explained and the Engineering team agreed to take ownership of the issue, which was then moved to the Engineering board.
  5. In the Engineering Stand-Up, we created a team to investigate.  
  6. The next day, day 3 of the issue, the team decided that the solution was to update our SPF records to have stricter spoofing requirements. At the same time, however, we determined that this solution could potentially impact all systems that legitimately send emails as both Dyspatch and Sendwithus, i.e. Salesforce, Pardot, Google Drive etc.
  7. Again, we added an item to ‘Needs Discussion’ for the next Company Stand-Up, to discuss the risks associated with the proposed solution and decide, as a company, how to proceed.
  8. The item was addressed at the next Stand-Up and as a company, we decided to implement the solution and check all systems over the weekend, to minimize the impact if something went wrong.  
  9. Over the weekend, a multi-department task force checked all potentially-affected systems and, after everything was deemed okay, another item was placed into the Company Stand-Up to let everyone know all was well but the situation should be monitored.
  10. In the end, this complex, cross-departmental, high-blast-radius change was handled within standard operating procedures, with minimal impact to company velocity, and all within five business days.

Figure 3: Sendwithus Stand-Up flow chart (Stand-Up in action).

Consider how a more traditional company might have handled the issue. Same complaint from the customer and same escalation to the manager. The manager still decides they need Engineering's help but, with no defined process to ask for that help, they go directly to the VP of Engineering. The VP Eng might say something like, "I'd love to help you out but your priority is not my priority. I don't have resources right now." So the Sales manager escalates the issue, continuing up the ladder until they reach someone with both the will and the authority to tell the VP Eng that they must deal with the issue. This takes time, and erodes relationships.

Ultimately, the solution would likely be the same but it would have taken much longer and resulted in weakened relationships between departments.

Our Stand-Up process provides a direct channel to engage the broadest audience and have actionable tasks decided upon and delegated within ten minutes, every day. This specific example involved four departments and required engaging the entire company three times. Anything less, on either axis, could have been disastrous.

My point is that this entire workflow is accomplished with vanilla, out-of-the box Asana. This hierarchical, iterative, cross-departmental ownership system is *easy* in Asana but in other tools, as experience has taught me, can be a nightmare.  

The post Dyspatch and Asana Making the Most of the Right Tool appeared first on Dyspatch.

]]>