Want to join a conference call to discuss more about these thoughts? Email Arman at Arman.Tabatabai@techcrunch.com to secure an invite.
Hello! We are experimenting with new content forms at TechCrunch. This is a rough draft of something new. Provide your feedback directly to the authors: Danny at danny@techcrunch.com or Arman at Arman.Tabatabai@techcrunch.com if you like or hate something here.
Harari on technology and tyranny
Yuval Noah Harari, the noted author and historian famed for his book “Sapiens,” wrote a lengthy piece in The Atlantic entitled “Why Technology Favors Tyranny” that is quite interesting. I don’t want to address the whole piece (today), but I do want to discuss his argument that humans are increasingly ceding their agency to algorithms that make decisions for them.
Harari writes in his last section:
Even if some societies remain ostensibly democratic, the increasing efficiency of algorithms will still shift more and more authority from individual humans to networked machines. We might willingly give up more and more authority over our lives because we will learn from experience to trust the algorithms more than our own feelings, eventually losing our ability to make many decisions for ourselves. Just think of the way that, within a mere two decades, billions of people have come to entrust Google’s search algorithm with one of the most important tasks of all: finding relevant and trustworthy information. As we rely more on Google for answers, our ability to locate information independently diminishes. Already today, “truth” is defined by the top results of a Google search. This process has likewise affected our physical abilities, such as navigating space. People ask Google not just to find information but also to guide them around. Self-driving cars and AI physicians would represent further erosion: While these innovations would put truckers and human doctors out of work, their larger import lies in the continuing transfer of authority and responsibility to machines.
I am not going to lie: I completely dislike this entire viewpoint and direction of thinking about technology. Giving others authority over us is the basis of civilized society, whether that third party is human or machine. It’s how that authority is exercised that determines whether it is pernicious or not.
Harari brings up a number of points here, though, that I think deserve a critical look. First, there is this belief in an information monolith, that Google is the only lens through which we can see the world. To me, that is a remarkably rose-colored view of printing and publishing before the internet age, when gatekeepers had the power (and the politics) to block public access to all kinds of information. Banned Books Week seems in some ways quaint in the Amazon Kindle era, but the fight to get books into public libraries was (and sometimes still is) real. Without a copy, no one had access.
That disintegration of gatekeeping is one reason among many why extremism in our politics is intensifying: there is now a much more diverse media landscape, and that landscape doesn’t push people back toward the center anymore, but rather pushes them further to the fringes.
Second, we don’t give up agency when we allow algorithms to render judgments on us. Quite the opposite, in fact: we are using our agency to give a third party independent authority. That’s fundamentally our choice. What is the difference between an algorithm making a credit card application decision and a (human) judge adjudicating a contract dispute? In both cases, we have ceded at least some of our agency to another party to make decisions over us, because we have collectively decided to make that choice as part of our society.
Third, Google, including Search and Maps, has empowered me to explore the world in ways I wouldn’t have dreamed of before. When I visited Paris for the first time in 2006, I didn’t have a smartphone, and calling home cost $1 a minute. I saw parts of the city, and wandered, but I was mostly held back by fear — fear of going to the wrong neighborhood (the massive riots in the banlieues had happened only a few months prior) and fear of getting completely lost and never making it back. Compare that to today, when access to the internet means I can actually get off the main tourist stretches peddled by guidebooks and explore neighborhoods I never would have dreamed of visiting before. The smartphone doesn’t have to be distracting — it can be an amazing tool for exploring the real world.
I bring up these different perspectives because I think the “black box society,” as Frank Pasquale calls it, is under unfair attack. Yes, there are problems with algorithms that need addressing, but are they worse or better than their human substitutes? When judges’ mealtimes can vastly affect the outcome of prisoners’ parole decisions, don’t we want algorithms to do at least some of the work for us?
Lying to ourselves
Speaking of humans acting badly, I wrote a review over the weekend of “The Elephant in the Brain,” a book about how we use self-deception to ascribe better motives to our actions than the ones we truly hold. As I wrote about the book’s thesis:
Humans care deeply about being perceived as prosocial, but we are also locked into constant competition, over status attainment, careers, and spouses. We want to signal our community spirit, but we also want to selfishly benefit from our work. We solve for this dichotomy by creating rationalizations and excuses to do both simultaneously. We give to charity for the status as well as the altruism, much as we get a college degree to learn, but also to earn a degree which signals to employers that we will be hard workers.
It’s a depressing perspective, but one that’s ultimately correct. Why do people wear Stanford or Berkeley sweatshirts if not to signal things about their fitness and career prospects? (Even pride in school is a signal to others that you are part of a particular tribe.) One of the biggest challenges of operating in Silicon Valley is simply understanding the specific language of signals that workers there send.
Ultimately, though, I was underwhelmed by the book, because I felt that it didn’t lead to a broader sense of enlightenment, nor could I see how to change either my behavior or my perceptions of others’ behavior as a result of reading it. That earned a swift rebuke from one of the authors last night on Twitter.
Okay, but here’s the thing: of course we lie to ourselves. Of course we lie to each other. Of course PR people lie to make their clients look good, and try to come off as forthright as possible. The best salesperson is going to be the one who truly believes in the product they are selling, rather than the one who knows its weaknesses and scurries away when they are brought up. The book makes a claim, one I think is reasonable, that self-deception is the key ingredient: we can’t handle the cognitive load of lying all the time, so evolution has adapted us to lie with greater facility by not allowing us to realize that we are doing it.
Nowhere is this more obvious than in my previous career as a venture capitalist. Very few founders truly believe in their products and companies. I’m quite serious. You can hear the hesitation in their voices about the story, and you can hear the stress in their throats when they hit a key slide that doesn’t exactly align with the hockey stick they are selling. That’s okay, ultimately, because these companies were young, but if the founder of the company doesn’t truly believe, why should I join the bandwagon?
Confidence is ambiguous: is a founder confident because the startup truly is good, or merely projecting confidence to mask a lack of enthusiasm? Sorting that out is what due diligence is all about, but what I do know is that a founder without confidence isn’t going to make it very far. Lying is wrong, but confidence is required, and the line between the two is very, very blurry.
Spotify may repurchase up to $1B in stock
Before the market opened this morning, Spotify announced plans to buy back stock starting in the fourth quarter of 2018. The company has been authorized to repurchase up to $1 billion worth of shares, capped at 10 million shares in total. The exact cadence of the buybacks will depend on market conditions, and the purchases will likely occur gradually until the program expires in April 2021.
The announcement comes on the back of Spotify’s quarterly earnings report last week, which led to weakness in the company’s stock price amid concerns over its outlook for subscriber, revenue and ARPU (average revenue per user) growth, despite the company reporting stronger profitability than Wall Street expected.
After its direct listing in April, Spotify saw its stock price shoot to over $192 a share in August. However, the stock has since lost close to $10 billion in market cap, driven in part by broader weakness in public tech stocks, as well as by fears about subscription pricing pressure and ARPU growth as more of Spotify’s users opt for discounted family or student subscription plans.
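To make the ARPU concern concrete, here is a rough back-of-the-envelope sketch. The plan prices, family-plan sharing assumption and subscriber mix below are illustrative assumptions, not Spotify’s reported figures:

```python
# Illustrative only: hypothetical plan prices and subscriber mix,
# not Spotify's reported numbers.
INDIVIDUAL_PRICE = 9.99  # assumed monthly price of an individual plan
FAMILY_PRICE = 14.99     # assumed monthly price of a family plan
FAMILY_MEMBERS = 4       # assumed average accounts sharing a family plan

def arpu(individual_subs: int, family_plans: int) -> float:
    """Average revenue per user across both plan types."""
    revenue = individual_subs * INDIVIDUAL_PRICE + family_plans * FAMILY_PRICE
    users = individual_subs + family_plans * FAMILY_MEMBERS
    return revenue / users

# As subscribers shift to family plans, ARPU falls even though revenue still grows.
print(round(arpu(1_000_000, 0), 2))      # 9.99
print(round(arpu(800_000, 100_000), 2))  # 7.91
```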
Per TechCrunch’s Sarah Perez:
…The company faces heavy competition these days – especially in the key U.S. market – from Apple Music, as well as from underdog Amazon Music, which is leveraging Amazon’s base of Prime subscribers to grow. It also has a new challenge in light of the Sirius XM / Pandora deal.
The larger part of Spotify’s business is free users – 109 million monthly actives on the ad-supported tier. But its programmatic ad platform is currently only live in the U.S., U.K., Canada and Australia. That leaves Spotify room to grow ad revenues in the months ahead.
The strategic rationale for Spotify is clear, despite early reports painting the announcement as a way to buoy a flailing stock price. With more than $1 billion in cash sitting on its balance sheet and a depressed stock price, the company clearly views the buyback as an affordable way to return cash to shareholders while the stock looks undervalued.
As for Spotify’s longer-term outlook from an investor standpoint, the company’s ARPU growth should not be viewed in isolation. In the past, Spotify has highlighted discounted or specialized subscriptions, like family and student plans, as having a much stickier user base. And the company has seen its retention rates improve, with churn consistently falling since its IPO.
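One common rule of thumb makes that trade-off clear: a subscriber’s expected lifetime value is roughly ARPU divided by monthly churn, so a stickier discounted subscriber can be worth more than a full-price one. The ARPU and churn numbers below are illustrative assumptions, not Spotify’s reported figures:

```python
# Illustrative only: assumed ARPU and churn values, not Spotify's actuals.
def lifetime_value(monthly_arpu: float, monthly_churn: float) -> float:
    """Rule-of-thumb expected lifetime value: ARPU / churn.
    (Average subscriber lifetime in months is roughly 1 / churn.)"""
    return monthly_arpu / monthly_churn

# A full-price subscriber who churns faster...
print(round(lifetime_value(9.99, 0.05), 2))  # 199.8
# ...can be worth less than a discounted subscriber who sticks around.
print(round(lifetime_value(7.50, 0.03), 2))  # 250.0
```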
The stock is up around 1.5 percent on the news on top of a small pre-market boost.
What’s next
- We are still spending more time on Chinese biotech investments in the United States (Arman wrote a deep dive on this).
- We are exploring the changing culture of Form D filings (startups seem to be increasingly forgoing Form D disclosures on the advice of their lawyers).
- We are looking at India’s tax reform and how startups have taken advantage of it.