As I mentioned in my first post in this series, the central purpose of Data Science is to find patterns in data and use these patterns to make useful predictions about the future. It’s this predictive part of Data Science that gives the discipline its mystique; even though Data Scientists spend only a relatively small fraction of their time on this area compared to the more workaday activities of loading, cleaning and understanding data, it’s the step of building predictive models that unlocks the value hidden within the data.
Every September in France, summer-weary parents pack their children off to school for la rentrée (‘the return’) and return to work after the idleness of August. The break from the métro, boulot, dodo routine of daily life enables both students and their parents to throw themselves back into their work and studies with renewed vigor.
Ah, GDPR. Like the guy (or girl) you matched with on Tinder six months ago who got less interesting the more you got to know them, it just won’t go away. It keeps sliding into your DMs with teasing headlines like, “Data Protection Authority of Baden-Württemberg Issues First German Fine Under the GDPR” or “Washington Post offers invalid cookie consent under EU rules”. And there you were thinking you were done with it back in May, when you sent all your users that “Please respond to this email to stay on our mailing list” email and threw that giant banner about cookies up on your website.
Ask any Data Scientist and they will tell you that the process of ‘wrangling’ (loading, understanding and preparing) data represents the lion’s share of their workload – often as much as 80%. However, that number is not as alarming as it may at first seem. To understand why, let me tell you about my living room.
Ask an Analyst, particularly a Digital Analyst, how they’d like to develop their career, and they are quite likely to tell you that they want to get into Data Science. But in fact the two disciplines (if they can even be described as separate disciplines) overlap considerably – some would even say completely. So what is the difference between Analytics and Data Science?
There’s a lot of buzz about Data Science these days, and especially its super-cool subfield, Machine Learning. Data Scientists have become the unicorns of the tech industry, commanding astronomical salaries and an equal amount of awe (and envy) to go with them. Partly as a result of this, the field has developed something of a mystical aura – the sense that not only is it complex, it’s too complex to explain to mere mortals, such as managers or business stakeholders.
It’s true that mastery of Data Science involves many complex and specialized activities, but it’s by no means impossible for a non-Data Scientist to build a good understanding of the main building blocks of the field, and how they fit together.
The relentless rise of social networks in recent years has made many marketers familiar with the concept of the social graph—data about how people are connected to one another—and its power in a marketing context.
Facebook’s social graph has propelled it to a projected annual revenue of around $40B for 2017, driven primarily by advertising sales. Advertisers are prepared to pay a premium for the advanced targeting capabilities that the graph enables, especially when combined with their own customer data; these capabilities will enable Facebook to snag over 20% of digital ad spend in the US this year.
Partly as a result of this, many marketers are thinking about how they can exploit the connectedness of their own customer base, beyond simple “refer a friend” campaigns. Additionally, it’s very common to hear marketing services outfits tack the term graph onto any discussion of user or customer data, leading one to conclude that any marketing organization worth its salt simply must have a graph database.
But what is a graph, and how is it different from a plain old customer database? And if you don’t have a customer graph in your organization, should you get one?
At the end of the nineteenth century, electricity was starting to have a profound effect on the world. As dramatized in the excellent novel The Last Days of Night, Thomas Edison battled with George Westinghouse (the latter aided by Croatian genius/madman Nikola Tesla) for control over the burgeoning market for electricity generation and supply. The popular symbol of the electrical revolution is of course Edison’s famous light bulb, but almost more important was the humble electric motor.
Garry Kasparov will forever be remembered as perhaps the greatest chess player of all time, dominating the game for almost twenty years until his retirement in 2005. But ironically he may be best remembered for the match he failed to win twenty years ago, in 1997, against IBM’s Deep Blue chess computer. That watershed moment – marking the point at which computers effectively surpassed humans in chess-playing ability – prompted much speculation and hand-wringing about the coming obsolescence of the human brain, now that a mere computer had beaten the best chess grandmaster in the world.