When Do I Send Emails?

Work and the holidays have been distracting me from blogging and fun side projects for a couple of months, so I'm easing back into it with a really quick and easy Gmail data-wrangling post.

When I first started playing around with my Gmail data, I mentioned that I wanted to get some of the stats that Xobni used to provide before they were swallowed by the Yahoo! black hole. A couple of the simpler stats to compile are "what days of the week do I send most of my email?" and "what time of day do I send the most emails?".

In order to do any time-based analysis on my emails, I'm going to need the dates and times they were sent. So I've taken the PowerShell script from a while back and made a slight modification; in the $props hash I'm adding a field called SentDate:

$props = @{
    Id = $mimeMessage.MessageId
    To = $mimeMessage.To.ToString()
    From = $mimeMessage.From.ToString()
    FromEmail = $fromEmail
    Subject = $mimeMessage.Subject
    Body = $bodyText
    Format = $actualFormat
    SentDate = $mimeMessage.Date.ToUniversalTime()
}

MimeKit provides the sent date of the email as a DateTimeOffset; to keep things consistent, I'm converting everything to UTC at this stage.

From there, I import the data into pandas as per usual and filter it down to just the emails sent by me:

import pandas as pd
import numpy as np
import humanfriendly
import matplotlib.pyplot as plt
plt.style.use('ggplot')

df = pd.read_csv('../times.csv', header = 0)

fromMe = df.query('FromEmail == "[my email]"').copy()

It turns out that indexing your data by date/time in pandas is pretty easy; you just create a DatetimeIndex:

temp = pd.DatetimeIndex(fromMe['SentDate'], tz = 'UTC').tz_convert('America/Denver')

Here I'm telling pandas to create an index from the SentDate field, and that the field is already in UTC. Then I'm converting all of those dates and times to my local time zone so that the data makes sense from my local perspective. This mostly works, because I mostly live in the Mountain time zone. Some of my data will be a little skewed because of emails sent while traveling and a few months when I lived in the Eastern time zone, but not so much that I care. In a later post I might look at how this data changes over time, which is more interesting (I might even be able to identify when and where I was traveling based on that data).
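
To convince yourself the conversion is doing the right thing, it's easy to spot-check a single timestamp (a quick sketch with a made-up date):

import pandas as pd

# 18:30 UTC on a summer day should come back as 12:30 Mountain (UTC-6 during DST)
ts = pd.Timestamp('2015-06-15 18:30:00', tz='UTC')
print(ts.tz_convert('America/Denver'))  # 2015-06-15 12:30:00-06:00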

But for now, let's break down the data in temp and shove it back into the original dataset:

fromMe['DayOfWeek'] = temp.dayofweek
fromMe['Hour'] = temp.hour
fromMe['Year'] = temp.year

Now for each email from me, I've got columns telling me the hour of the day, the day of the week, and the year it was sent. From there, aggregating and charting are a snap:

# Number of emails sent by day of week
sentDayOfWeek = fromMe.groupby(['DayOfWeek']).agg({'Id' : 'count'})
sentDayOfWeek['Id'].plot(kind='bar', figsize=(6, 6), title='Emails Sent By Day Of Week')
plt.show()

# Number of emails sent by hour of day
sentHourOfDay = fromMe.groupby(['Hour']).agg({'Id' : 'count'})
sentHourOfDay['Id'].plot(kind='bar', figsize=(6, 6), title='Emails Sent By Hour Of Day')
plt.show()

The data is about what I'd expect; more emails on Monday than any other day (0 == Monday on this chart) and the majority of emails sent during the workday (with a dip around lunch).
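If the numeric labels bug you, you can swap in day names before plotting; here's one quick way to do it (an optional embellishment, not needed for the analysis):

# Swap the 0-6 dayofweek labels for readable day names
days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
sentDayOfWeek.index = [days[d] for d in sentDayOfWeek.index]
sentDayOfWeek['Id'].plot(kind='bar', figsize=(6, 6), title='Emails Sent By Day Of Week')
plt.show()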

Aggregating by year provides a bit of mystery, though:

sentYear = fromMe.groupby(['Year']).agg({'Id' : 'count'})
sentYear['Id'].plot(kind='bar', figsize=(6, 6), title='Emails Sent By Year')
plt.show()

The numbers vary quite a bit more than I'd expect. 2004 makes sense; I only started using Gmail in July of that year. And the next couple of years show me using Gmail more and more over my old Lycos account. The spike in 2011 also seems reasonable, as that's when I stopped working at an office with an Exchange server, so my day-to-day email load shifted. But the dips in 2012 and 2015? No idea. I'll have to dig further into those.

Creating A Fake Me From My Emails

Some Twitter bot or other got me thinking about Markov chains the other day (in the text-generator sense), and it occurred to me that it shouldn't be too hard to create one which (poorly) simulates me.

Markov chains are basically a set of states and probabilities of moving from one state to another. If you build one out of a body of text, you can map the likelihood of a given set of words following another set of words. The upshot of this is that if you start from a random set of words and follow the map (choosing your next state at each node randomly in proportion to its likelihood from the initial text), you can end up with something that sounds (sort of) like it came from the original body text. It's a popular way to create twitter bots. Markov chains have other, much more practical uses, but I'm not concerned about them today.
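If the description is hard to picture, here's a toy word-level version of the idea; it's much cruder than the generator I actually used below, but it shows the mechanics:

import random
from collections import defaultdict

def build_chain(text, order=2):
    # Map each run of 'order' words to the list of words that followed it
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, count=30):
    # Start in a random state and walk the chain; picking from the follower
    # list makes each choice proportional to its frequency in the source text
    state = random.choice(list(chain.keys()))
    output = list(state)
    for _ in range(count):
        followers = chain.get(state)
        if not followers:
            break
        output.append(random.choice(followers))
        state = tuple(output[-len(state):])
    return ' '.join(output)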

I've got 10 years of my emails already sitting in a .csv file; step one was loading them up and discarding all the ones I didn't send. After that, most of the work was cleaning up the data - most of the stuff in the bodies of my emails is actually pretty useless for this purpose. I had to remove all the quoted parts of other people's messages, all the HTML messages (even when cleaned up, they polluted the Markov chain too much), 'Forwarded Message' sections, URLs, and my own signatures.

After passing all the emails through the removeJunk function below, I globbed all the texts together into one giant string and fed it into this Markov generator from Amanda Pickering. With that done, I could just call generate_words() over and over to see what kind of nonsense fake me would spew out.

So here's my final code for taking my emails and creating a fake me, Black Mirror-style:

import pandas as pd
import numpy as np
import re
from markovgenerator import MarkovGenerator

# Read in our email data file
df = pd.read_csv('../bodytext.csv', header = 0)

# Only use mail I sent 
emails = df.query('FromEmail == "[my email]"').copy()

# Blank out any missing body text
emails.Body.fillna(' ', inplace = True)

# Regexes for truncating messages
# If any of these are found, the rest of the message is stuff I didn't write
quoteHeaderRegex = re.compile(r'On.*?wrote:', re.DOTALL)
originalMessageRegex = re.compile(r'^\s?\-.*?(Original|Forwarded) Message.*?\-\s?$', re.MULTILINE | re.IGNORECASE)
htmlRegex = re.compile(r'^\<html\>', re.MULTILINE)
googleReaderRegex = re.compile(r'^E\.Z\. Hart - Google Reader', re.MULTILINE)

# Other things in emails that aren't relevant
# If these are found, replace them with empty string
fromAndToRegex = re.compile(r'^(from:|to:|sent:).*?$', re.MULTILINE | re.IGNORECASE)
sigRegex = re.compile(r'^\-[\-\s]{1,4}E\.Z\.', re.MULTILINE)
dividerRegex = re.compile(r'\-{3,}')
urlRegex = re.compile(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+')


def markovText(row):
    text = row['Body']

    if(row['Format'] == 'Html'):
        return ''

    return removeJunk(text)

def removeJunk(text):
    text = stripAfter(text, quoteHeaderRegex)
    text = stripAfter(text, originalMessageRegex)
    text = stripAfter(text, googleReaderRegex)
    text = stripAfter(text, htmlRegex)

    text = re.sub(fromAndToRegex, '', text)
    text = re.sub(sigRegex, '', text)
    text = re.sub(dividerRegex, '', text)
    text = re.sub(urlRegex, '', text)

    return text

def stripAfter(text, regex):
    target = regex.search(text)
    if(target):
        return text[:target.start()]
    return text

# Run all the emails through the cleanup function
emails['Markov'] = emails.apply(markovText, axis=1)

# Concatenate all the emails into one giant input string
inputText = emails['Markov'][:].str.cat()

markov_gen = MarkovGenerator(inputText, 200, 3)
markov_gen.generate_words()

And here are a few of my favorite phrases from the results:

"Use cheap rum. Cheap rum is going to get the crab wontons- otherwise I can't guarantee your safety:)"

"And more important than anything else, has been what has kept me employed and made me successful. Anyway, I'm glad you took your flashlight."

"We will begin working on the changes Tory has asked for, and I'll eventually start going full troll without her around:) Okay."

"You're receiving this email because you're rewriting 10,000 lines of code that solved the first two weeks of August while in between leases."

"At this point I'll need jumping and, ideally, that signs be installed? How would I go about making that request? Again, we completely understand if you don't have any SharePoint development experience, just experience as a user in each role just to make sure it was knitting and not crocheting- I don't know football that well, but any of them, they might actually turn into assets , though."

Have fun creating your own email doppelgängers, but remember - cheap rum is going to get the crab wontons. I can't guarantee your safety.

How Long Would It Take To Read All My Email?

This is part of a series on mining your own Gmail data.

For this post I want to tackle a fun question: how long would it take to read all of my email if that's all I did, 24/7? It's one of those questions that should interest anyone who's concerned about information overload or is looking to pare down their information consumption: "Just how much of my time is theoretically committed to my inbox?"

First, the obvious: nobody actually does this. No one actually reads every email they receive from start to finish (as anyone who's dealt with email in a corporate environment knows all too well). Most of us have filters (both electronic and mental) set up to glean the info we need and skip the rest.

And I'll bet that a lot of email is written without any expectation that the whole thing will be read; the author may be well aware that different portions of the email are relevant to different recipients, or that the email itself will only be interesting to a subset of the mailing list (e.g., many marketing emails).

So it's not the one super-relevant data point that should make people completely re-think their information consumption habits or anything like that. But it is fun to think about, and as one data point among many others it might prove interesting or useful.

On to the fun part - actually coming up with a number!

Like most people, I'm getting new emails all the time. So technically I should be taking into account all the new emails I receive while I'm still reading through my old ones. But that's hard, so I'm not going to bother. Instead, I'm just going to assume I've stopped getting emails at all while I'm reading. Which means that getting a basic number is easy - I just have to count all the words in all my emails, divide that by the number of words per minute I read, and I've got the number of minutes it would take to read everything.
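As a back-of-the-envelope example with made-up numbers: 5 million words at 250 words per minute is 20,000 minutes, or about two weeks of non-stop reading.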

The first thing I need to do is go back to my PowerShell script and pull in the body of each email. This is where we hit the first snag - HTML emails.

For doing word counts, I really don't want to look at HTML emails, because there's a ton of junk in there which a human won't be reading. Luckily, most email clients which send HTML emails also include a text version; in those cases, we'll just extract that text portion of the email and ignore the HTML. Unfortunately, this isn't always the case; when there's not a text version available, we'll just have to get the HTML and figure out how to deal with it later.

As usual, MimeKit will be doing most of the work. This version of the script is pretty similar to our previous ones, except that we have to loop through the possible body formats for each message to figure out which formats are available. We always check for the 'Text' format first, because that's the one we really want. If that's not available, we run through the others until we find one that works.

The relevant changes are the hash of the possible formats, which we use for iteration and for tracking the number of emails of each type:

$formats = @{
    [MimeKit.Text.TextFormat]::Text = 0; 
    [MimeKit.Text.TextFormat]::Flowed = 0;
    [MimeKit.Text.TextFormat]::Html = 0; 
    [MimeKit.Text.TextFormat]::Enriched = 0; 
    [MimeKit.Text.TextFormat]::RichText = 0; 
    [MimeKit.Text.TextFormat]::CompressedRichText = 0
}

And the section where we determine what the actual format is and store it:

    $bodyText = $null
    $actualFormat = $null

    # Run through all the enumeration values
    # The pipe through sort ensures that we check them in the enum order,
    # which is great because we prefer text over flowed over HTML, etc.
    $formats.Keys | sort | % { 
        # try each Format until we find one that works
        if($actualFormat -eq $null) { 
            # Try to get the body in the current format          
            $bodyText = $mimeMessage.GetTextBody($_)
            if($bodyText) {
                $actualFormat = $_
            } 
        }
    }

    if($actualFormat -eq $null) {
        $unknownFormat += 1;
        $actualFormat = "Unknown"
    } else {
        $formats[$actualFormat] += 1;
    }

You can find the full script here.

A couple of notes:

  1. This isn't perfect; sometimes MimeKit can't really figure out what the format is. For example, I have some Skype notification emails which MimeKit thinks are HTML only, but are in fact text. I'm not sure why MimeKit gets confused (probably incorrect headers in the original emails), but out of about 43,000 emails only a couple dozen seem to have issues, so I'm not going to worry about it.
  2. In all of my emails, the only two formats returned were Text and HTML. This might have something to do with what Gmail supports; I've seen some posts that suggest Gmail doesn't support Flowed, though those may be outdated. In any case, I'm only really dealing with Text and HTML in my word counts.
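(If you want to check the format distribution in your own data, a one-liner against the Format column does it; this assumes the CSV is already loaded into a DataFrame called df, as in the earlier posts.)

# Tally how many emails ended up in each body format
print(df['Format'].value_counts())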

Once we've got the data, we can load it up in pandas and do some counting. Doing a naive count of the words in the plain text emails is trivial; we just define a function that calls Python's split method with None as the separator (which splits on any run of whitespace) and takes the length of the resulting list. Here's what textWordCount looks like:

def textWordCount(text):
    if not(isinstance(text, str)):
        return 0

    return len(text.split(None))

But the HTML emails are problematic because most of the content is markup that the user will never actually read. So we need to strip all that markup out and just count the words in the text portions of the HTML. To do that, we create another method which parses the HTML email content using the amazing Beautiful Soup library, strips away the style, script, head, and title parts, and extracts the text from what's left using get_text(). Once we've got the actual human-readable text, we can run it through our usual word counting method:

from bs4 import BeautifulSoup as bsoup

def htmlWordCount(text):
    if not(isinstance(text, str)):
        return 0

    soup = bsoup(text, 'html.parser')

    if soup is None:
        return 0

    # Strip out the parts of the document a human never reads
    [s.extract() for s in soup(['style', 'script', 'head', 'title'])]

    stripped = soup.get_text(" ", strip=True)

    return textWordCount(stripped)

I took a couple of online tests to get an idea of how fast I read and came up with 350 words per minute. With that bit of data, we can now add some more columns to our data and figure out the total time to read all the emails:

def wordCount(row):

    if(row['Format'] == 'Html'):
        return htmlWordCount(row['Body'])

    return textWordCount(row['Body'])

averageWordsPerMinute = 350

# Count the words in each message body
emails['WordCount'] = emails.apply(wordCount, axis=1)
emails['MinutesToRead'] = emails['WordCount'] / averageWordsPerMinute

# Get total number of minutes required to read all these emails
totalMinutes = emails['MinutesToRead'].sum()

# And convert that to a more human-readable timespan
timeToRead = humanfriendly.format_timespan(totalMinutes * 60)

The full script is here, if you're playing at home.

Running that against all of my Gmail gives me:

>>> timeToRead
'2 weeks, 6 days and 18 hours'

So if I sat down and read at my fastest speed 24/7 for three weeks straight with no breaks, no sleep, and never slowing down, I could finish reading every word of every email I've ever received in my Gmail account. If I only read them 8 hours a day, it'd take me about 9 weeks to finish.
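That last bit of arithmetic is easy to reproduce from the totalMinutes value above:

# Reading 8 hours a day instead of 24 stretches things out considerably
hoursPerDay = 8
weeksToRead = totalMinutes / 60 / hoursPerDay / 7
print(weeksToRead)  # roughly 9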

That's actually less than I expected, though "two whole months of your life spent just reading your email" is still a bit sobering.

Sobering enough that I'm not going to try to compute this for my other four email accounts, anyway.

Mining Your Gmail Data - Part 6

First off, let's take a look at the first question that came up at the end of the last post: ignoring the media type prefix (the 'application/', 'video/', etc.) of the MIME type.

That turns out to be pretty easy - the script from last time already collected that data, since MimeKit makes it available. We just need to adjust our pandas script to group on 'MediaSubtype' instead of 'MimeType':

types = notFromMe.groupby(['MediaSubtype'])

[Chart: Attachment Types by %]

That cleaned things up a lot. But we still have the second question from the last post: what's behind octet-stream?

Application/octet-stream is basically the generic binary file option; most likely the original client which uploaded the file didn't specify the type. But we can make an educated guess about the type based on the file name extension, where we have it. So we'll write a quick function which takes a row of data and, if the media subtype is 'octet-stream', returns the file name extension from the ContentTypeName column (which holds the attachment's file name):

import os.path

...

def filetype(row):
    if not(isinstance(row['ContentTypeName'], str)):
        return ''
    if row['MediaSubtype'] == 'octet-stream':
        return os.path.splitext(row['ContentTypeName'])[1]
    return row['MediaSubtype']

We can run that function against our data and put the results in a new column which we'll call 'FileType':

notFromMe['FileType'] = notFromMe.apply(filetype, axis = 1)

Now, instead of grouping by MediaSubtype, we just group by FileType. This isn't perfect - some of our data is getting discarded because there's not enough info between the media subtype and the file name to figure out what kind of attachment it is. But the data is mostly good, and gives us a much more useful chart:

[Chart: Attachment Types by %]

I'm also running this chart with a threshold of 0.02 for the 'other' section, to clean up the less-frequent file types. The whole script can be found here.
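The thresholding code isn't shown above, but it amounts to something like this sketch (one way to do it; the linked script may differ slightly):

import matplotlib.pyplot as plt

# Lump any file type under 2% of the total into an 'other' bucket
counts = notFromMe.groupby(['FileType']).agg({'Id': 'count'})['Id']
fractions = counts / counts.sum()
big = fractions[fractions >= 0.02]
big['other'] = fractions[fractions < 0.02].sum()
big.plot(kind='pie', figsize=(6, 6), title='Attachment Types by %')
plt.show()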

So, if I'm looking to downsize my Gmail backup, I should probably concentrate on JPEGs, videos (wmv and mpeg), and PDFs.