No company has seen its fortunes rise quite so high amidst the COVID-19 crisis as videoconferencing platform Zoom. As families, friends, and organizations distanced from one another, they sought new digital means to stay in touch. Zoom set itself apart from more established names like Skype or FaceTime by simply being easier to use. The Zoom team made joining meetings frictionless and delivered unmatched call quality. No wonder “Zoom” has become the proprietary eponym of choice for videoconferencing. (That the silly backgrounds added some levity in an anxious situation didn’t hurt, either.)

Along with popularity came closer scrutiny. Stories soon spread about Zoom’s neglect of its users’ privacy and security. Among the transgressions? Security bypasses installed to streamline setup. Bugs that enabled hackers to hijack users’ cameras. A privacy policy that granted Zoom the right to use video from private calls in marketing collateral. The company has drawn the attention of New York’s Attorney General, as well.

I’m not going to list all of Zoom’s security and privacy problems. Others have already done so, with more expertise than I could muster. (I recommend Doc Searls’ series on Zoom’s data harvesting policies and Micah Lee and Yael Grauer on Zoom’s uncommon interpretation of “end-to-end encryption.”)

But, I do want to suggest that Zoom can serve as a good case study for why strategy needs to be more than identifying outcomes and tactics. Strategy’s also about values. As the Zoom story illustrates, businesses and teams have an important ethical obligation to their customers and users. Strategy needs to play a role in honouring that.

Strategy creates focus

As Simon Pitt points out, any person-to-person communications software comes with challenges for user adoption. It’s not enough to convince one person to use it. You need to get at least two users who want to talk to one another to adopt the technology. If you’re talking about a team, your challenge is even bigger. Any friction at all can derail the whole process.

Plus, there’s a cost to switching. It’s easier to stick with comfortable, familiar tools like Slack, email, or even the phone. (Of course, the corollary of this is that, once you’re in, you’ve got it made. Social pressure is powerful. Just ask Facebook.)

So, when Zoom writes in a blog post that “our customers have told us that they choose Zoom for our frictionless video communication experience,” I believe them. One can imagine a conversation around Zoom’s whiteboard: “Our research discovered that a complicated installation is a major barrier to adoption. How might we streamline that?”

I suspect that when Zoom’s product team prioritized their desired outcomes, streamlining adoption was at the top of their list. You can see many of the other features for which Zoom is famous—including the silly backgrounds and even the potato filter that gets so many people talking—cascading from that strategic focus.

Strategy is more than a metric

In the words of A.G. Lafley and Roger L. Martin, strategy “requires making explicit choices—to do some things and not others—and building a business around those choices.” Zoom, it seems, decided to grow its user base as quickly as it could.

To achieve this growth, Zoom had to make trade-offs. Directing your resources at one target means you’re pulling them away from another. Unfortunately for the millions of users who have signed up in the last few weeks, Zoom seemingly chose to back off on security.

The Zoom case makes visible the dark underbelly of outcome-based product strategy. Yes, it’s critical to have measurable outcomes to shoot for. But what gets measured gets manipulated.

When strategy is distilled down to the relentless pursuit of one key metric—whether it’s Net Promoter Score, new user sign-ups, or conversions—there’s the risk that the metric itself becomes the goal. That’s wrong. A key performance indicator is just that: an indicator. It signals progress. It isn’t progress by itself.

But when the KPI does become the goal, metric-obsessed teams will pursue that number by any means necessary. That’s how you get manipulative dark patterns, hacked-together spaghetti code, and gaps in security. This is what Simon Sinek, in The Infinite Game, calls “ethical fading”: the acceptance of minor transgressions in pursuit of a short-term goal, at the cost of a greater good. Many small things add up to big things. It’s not necessarily malicious. But it is neglectful and possibly harmful.

Strategy makes the right choice clear

A good strategy should serve as a guardrail against this.

If the strategy has done its job, these kinds of transgressions should be prevented, or at least limited, by a clear statement of vision. A good vision statement governs the activities and actions your organization, your team, and your product will and will not engage in. Ideally, it points to the broader positive impact you will make on the world—the hallmark of what Sinek calls an “infinite mindset.” It should also limit the temptation of underhanded or unethical quick fixes. Strategy is about making choices. More than that, it should make the right choice the obvious one.

The vision statement is critical for that. But it needs to be something more than a hollow marketing statement that sounds good on the website. Zoom, after all, says that its “value is to care. We care for our customers, employees, company, community, and selves.” But companies need to follow words like those with action. Strategy should translate vision into action. Instead, somebody at Zoom prioritized letting users make themselves look like potatoes over basic security.

Strategy is ethical

It’s critical to have short-term goals that signal whether progress is being made. Measurable outcomes are good things for product teams.

But the pursuit of these goals needs to be tempered by clearly articulating the lines you will not cross. These lines often go unstated, maybe because we don’t want to admit that they need to be said outright. We’d rather assume the social contract is strong enough that we don’t have to remind ourselves of the duty we have to our customers and our users.

This is part of why the Zoom case is especially troubling: it shows just how much trust we place in software developers to take care of technical security challenges we can’t possibly wrap our heads around, to defend us against bad actors we could never understand. It’s a violation of our belief that whoever is responsible for the product will act in good faith to protect our basic digital rights.

We can’t expect users or customers to understand what they need from security. We can’t expect them to understand why they might want end-to-end encryption. Nor can we expect them to understand fully the implications of a loose privacy policy. Heck—we can’t even expect people to act rationally most of the time. How can we ask them to make logical, informed decisions in the face of products designed at every turn to nudge them closer to a predetermined outcome?

In Zoom’s defence, they moved quickly to adjust their privacy policy and their security practices. But this is bigger than Zoom. Time and time again we find that our faith in tech companies might be unfounded or idealistic. As long as companies are rewarded by the market for skirting what should be fundamental protections, they’ll keep doing it.

Maybe Mike Monteiro is onto something.