“Among the machine learning strategy consultations you’ve done, which kinds of product team were the most challenging to work with?”
After consulting on hundreds of machine learning projects, I’ve learned to pay attention to early warning signs that the client is in danger of shooting themselves in the foot. Here are my top three:
- They’re marketing victims with unrealistic expectations
  * Special case: Willing to launch at all costs
  * Special case: No data (and other basic requirements)
- There’s a lack of respect for skills diversity
  * Special case: Toxic snobbery
- The team has no idea who’s in charge
  * Special case: The pointy-haired boss
Details below. (If you find any of my writing useful, the perfect thank you is a retweet.)
#1 Marketing victims with unrealistic expectations
There’s a lot of hype and nonsense about AI out there, so when teams come to me straight from the latest As-Seen-On-TV session (“…but if you call now, we’ll throw in 50 free deep neural networks which you can compose into an unholy ensemble!”) I know I’m going to have to undo all kinds of unrealistic expectations.
Science fiction is a terrible teacher… but an ace salesperson.
If you think ML/AI is magical, you’re in danger of making project decisions that are equivalent to speeding without a seat belt. Everyone take a deep breath!
ML/AI is more about giving your team an alternative approach to programming, which suits some (not all!) tasks much better than the traditional approach. This is both boring and very useful… which is exactly how I like it. A team that is sober, er, I mean, motivated by value instead of dazzle is much more likely to succeed at AI.
Special case: Willing to launch at all costs
Sometimes, the team doesn’t care about solving real business problems with ML/AI. Instead, they’re in it for the optics: they want to be able to say flashy things about their “AI-powered business.” They don’t have any standards and they don’t care if it works. They just want to launch some hot AI.
The trouble is, they’re not honest about their goal. If they were, I’d help them find an easy toy application that doesn’t touch anything too important and won’t hurt anyone if it’s rubbish. A homework project for adult students, essentially. I’d even throw in a couple of ideas about how to pick an impressive-sounding one that’s still harmless. Everyone would be a winner.
Alas, in my experience, such teams rarely fess up until it’s too late. Sometimes they don’t even admit it to themselves. Instead, they pick a mission-critical (read: impressive) application but willfully mishear any talk about making sure their system is performant and reliable. (“Aww, making it work properly is so tedious. I’ll just set the performance bar after the fact.”)
Everything would be better if teams were honest (at least with themselves) about their true goals for ML/AI projects and structured their strategy around those. Add the marketing dazzle later.
Special case: No data (and other basic requirements)
Since machine learning is magic and everyone’s doing it, you can too! Even if you have no data! Right? Wrong.
There are a bunch of conditions your project needs to meet before you can seriously consider ML/AI. Go through this 22-item checklist to see if your project is a nonstarter. Think of it as your free ML/AI-strategy-consultant-in-a-box.
#2 No respect for skills diversity
Another sign that a project headache is coming on is a lack of respect for skills diversity. In fact, let me tell you my least favorite question of all time. It’s: “How would you describe the ideal AI person?”
Don’t fall into the trap of over-focusing on roles that come into play in the middle of the process.
I dislike the question vehemently because it assumes that applied ML/AI is a game for a solo cowboy, when it’s actually a team sport. No single worker has all the skills required for success. It takes a village to create AI solutions (especially if we’re talking about mission-critical solutions at scale) and if your team is missing any of the required skills, the whole thing will collapse. If you have no ability to work with interdisciplinary teams, you’ll have problems applying AI… because that’s what it takes to get the job done.
If you keep trying to hire one person to do everything, no wonder you’re complaining of a talent shortage!
I find that it takes much more of my patience and energy to advise folks who think that workers just like them are the best and smartest (and everyone else is an idiot). I’ve run into AI researchers who seem to think that by diversely-skilled workers I mean people with CS/AI PhDs from different advisors. I’ve met teams made up entirely of engineers who had no idea why it might be better if the user experience design bits were handled by a UX professional. (You think you know design because you can code? What is this, the 90s?)
Be sure to celebrate skills diversity in your specialists and ensure that everyone on your team is at least somewhat clued into the basic sketch of what everyone else on the team does. Learn more about the roles on typical AI projects here: [1], [2], [3], [4].
Special case: Toxic snobbery
The AI industry is rife with a special kind of bad apple, the bully who splits AI workers into two categories: legitimate AI participants like themselves and barely-necessary appendages. Yes, I’m looking at you, AI researchers and engineers. Many of you are lovely people who are eager to embrace the diverse contributions of colleagues with backgrounds unlike your own (bonus points if you put in the mentoring time to bring them up to speed on the bits they’re struggling with), but some of you sit there grumbling that people without a PhD should get off your AI lawn.
Please stop.
A much better strategy would be to embrace skills diversity, teach everyone to appreciate the different roles, work together effectively, and encourage mutual respect. Learn to admire the things your colleagues are good at, even if these things are orthogonal to your own skillset. If you won’t do it for the sake of being a nice person, do it for the sake of project success. Collaboration with those who are different from us makes us all stronger. And if that sounds like fluff-covered-bleeding-heart nonsense, let me refer you instead to the coldest bastion of heartless rationality: your economics textbook. Open it up to the chapter on comparative advantage and start reading… it’ll say exactly the same thing (with math, since common sense seems not to work on you).
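Since that textbook may not be within arm’s reach, here’s the comparative-advantage arithmetic in miniature, with numbers I’ve made up purely for illustration (a toy sketch, not data from any real team). Suppose researcher Alice can produce 10 models or 10 designs a month, while designer Bob can produce 2 models or 8 designs:

```latex
% Toy comparative-advantage arithmetic (all numbers invented for illustration).
% Alice: 10 models or 10 designs per month.  Bob: 2 models or 8 designs per month.
% Opportunity cost of one model: Alice forgoes 1 design; Bob forgoes 4 designs,
% so Alice should lean toward models even though she is better at both tasks.
\begin{align*}
\text{Both split time 50/50:}\quad & (5+1)\ \text{models},\ (5+4)\ \text{designs} &&= 6\ \text{models},\ 9\ \text{designs}\\
\text{Alice 80\% models, Bob 100\% designs:}\quad & (8+0)\ \text{models},\ (2+8)\ \text{designs} &&= 8\ \text{models},\ 10\ \text{designs}
\end{align*}
```

Alice beats Bob at both tasks, yet handing Bob the design work gets the team more of everything. That’s all comparative advantage means, and it’s the cold, rational case for respecting the specialist.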
There are many different kinds of legitimate participants in the applied AI space and — depending on its difficulty — your project might need them all. Even if you find a magical unicorn who has every skill (someone who is simultaneously the perfect engineer, the perfect machine learning researcher, the perfect UX designer, the perfect statistician, the perfect product manager, the perfect team leader, the perfect analyst, the perfect domain expert, the perfect business decision-maker, and everything else… all in one body), there will be too much work for them to do alone and they’ll be forced to specialize. So accept the specialists right off the bat! You wouldn’t insist on hiring Olympic gold medalists to move your furniture for you, right? Well, don’t do the tech equivalent. Refusing to respect an ethicist’s opinion about ethics unless they have a second PhD in computer science (true story) is daft; get over yourselves.
Instead of trying to do it all yourself, learn how to check whether the baton was passed between your teammates correctly.
To work in applied ML, you don’t need to be a machine learning professor. You only need to be a wizard at your part of the process and have a good understanding of what your team members do (though not how to do it yourself). For example, your machine learning engineer needs to be able to understand requests coming from business leaders, they need to be able to evaluate when those folks are doing their part of the job correctly, and they should bottleneck the process if bad requests are coming in. They shouldn’t just succumb to working on ill-advised, impulsive nonsense.
When there’s a labor shortage, people will step up to fill it. Take a moment to think about what kind of personalities are less likely to be deterred by toxic snobbery.
Oh, and while I’m on the topic, there’s another effect of toxic bullying and snobbery: sane beginners are chased out — crushed by crippling impostor syndrome instead of coached by you — while shameless beginners are undeterred. That’s something my friends and I jokingly refer to as the second impostor syndrome of AI (survival favors real impostors, so a disproportionate fraction of people who stay in the game are buzzword slingers who don’t even understand what they don’t understand). The Dunning-Kruger effect is getting so ugly it resembles Freddy Krueger. I’d also hazard a guess that it’s not doing the field’s diversity efforts any favors.
#3 The team has no idea who’s in charge
When the project lacks a skilled leader, sponsor, and decision-maker, there’s a huge chance that the team’s effort is wasted… or worse.
“Where are your grown-ups?” is what every good AI strategy consultant is screaming on the inside when everyone except the product leader / decision-maker shows up to the first meeting.
If there’s clarity about who’s in charge, we’re in luck. Perhaps the folks who’ve left their boss at home are doing a bit of exploration/prototyping to save their decision-maker’s time (commendable!) or perhaps the team has already progressed past step 1 (even better) and are looking for advanced advice.
Unfortunately, quite often the folks seeking my advice haven’t considered who would be responsible for the decision-maker’s role in the project. They’re hoping to sprinkle some machine learning magic pixie dust over their work because all their friends are doing it. Uh-oh! It turns out that the decision-maker plays the most important role in the project — learn more here — so you shouldn’t start without one. Even if you come up with something great, there’s a big chance your efforts will go to waste.
Special case: The pointy-haired boss
A special case of this scenario is the one where the “leader” is incompetent. That situation has all the fun of the absentee decision-maker setting with the added bonus of inmates attempting to trick the asylum into handing them the keys. Skilled leadership is even more important in ML/AI than in traditional software projects. To learn more, see the article in which I call the pointy-haired boss in AI “one of the four horsemen of the apocalypse.”