If there’s one thing they don’t teach you in school, it’s the “human” side of your future career. In marketing classes, you learn how to reach people, what to say to them, how to figure out what they want, and whether a marketing program is worth it. In finance classes, you learn how to turn the next 10 years of a company’s or project’s “probable” performance into a single number that means either “invest” or “ignore.” In engineering classes, you learn how to make kickass robots or something.
On the job, however, there’s routinely something in the way: something that redirects our marketing program at the “wrong” people, something that second-guesses our valuation of a company, something that transforms our kickass robot into a somewhat interesting home appliance. They’re called people. And, truth is, their influence is largely positive, despite early-career feelings to the contrary. Assuming the human side of your job is constructive instead of corrosive (also possible), these people have caught our mistakes, mitigated our oversights, focused our ideas, and augmented our original contribution. Our original idea is now better than it was before.
That’s all peaches and rosewater–the power of teamwork, catalytic trust, construction through criticism and other what-have-yous.
Being human and being surrounded by humans isn’t all productivity and optimization, however. There’s at least one aspect of the human side of The Career that is more often than not a shame and better off avoided entirely. It involves our words; more specifically, the fact that understanding words means interpreting them, and interpretation is by definition subjective and biased. People routinely receive messages others never sent, is what I’m trying to say. Here are 5 of the most hilariously misunderstood words at the heart of this problem. Buckle up.
Recommendation
Many, when “recommendation” enters their ear holes, hear the word “mandate” or “directive” instead. They perceive a loss of control over their own programs and light their torches for the coming battle. I’d believe you if you told me that some people do use the word “recommendation” to soften a directive or a mandate. And I’d agree if you said that never helps anyone. In most productive relationships, however, “recommendation” means “ideas that you may find helpful.” Take these relationships, for example:
– Consultant/Executive
– Agency/Client
– Analyst/Program Manager
– Adviser/Decision Maker
In these relationships, the consultant, agency, analyst or adviser has done some work for the executive, client, program manager or decision maker in order to help them solve a problem. After the work is done, the consultant/agency/analyst/adviser has perspective on the issue. They apply this new perspective to their expertise and come up with a list of recommendations meant to help the executive/client/program manager/decision maker solve the original problem. The expectation is that one of these things happens:
– Springboard: The decision maker doesn’t want to implement the recommendations as they are, but the recommendations do inspire new ideas that the decision maker thinks will work.
– Catalyst: The decision maker likes the recommendations and has some ideas of their own to make them even stronger.
– Solution: The decision maker likes the recommendations and wants to implement them as-is.
If the decision maker can’t do one of those three things, then the analyst can cry bitter nerd tears into their copy of Web Analytics 2.0. Seriously, I’m worried that one day I won’t even be able to read it anymore.
Improvement
This one tends to hit a little harder than “recommendation.” When this word is misunderstood, it seems to mean something like “you suck” or “what you have here is garbage.” In the past, when I’ve played the analyst role and used this word, doing so has prompted questions like “so, what exactly went wrong?” or “why don’t you think this was a good approach?”
In reality, an improvement is just a thing that makes another thing better. This means that the improved thing could have been “good” before the improvement, and the improvement is designed to make it “good + 1.”
The mindset behind suggesting an improvement is that continual improvement is the mark of a successful or “good” program. It’s the reason why asking an analyst whether a metric’s value is “good” or “bad” will turn the analyst’s face into a blank slate if your program doesn’t have its history documented. The “good” value of a key performance indicator is the one that’s better than it was last time.
Anyway, if the program’s outcome rate isn’t 100%, then outcome rate is a candidate for improvement, even if it’s 85%. If bounce rate isn’t 0%, then it’s a candidate as well. Any given program likely has a handful of improvement candidates at any given point in time. In fact, if an analyst ever reports back with “wow, this program is really great, keep it up!” without “and here’s how to make it better,” and the program didn’t just secure world domination for your company, then they’re either buttering you up or their analysis was lazy. A particularly powerful approach I’ve found in my analyst roles is homing in on one single candidate for improvement and recommending the hell out of solutions for that candidate. Brand recall was 86%? How do we push it up to 90%? Typically, what makes an improvement candidate worth an obsession is that a) it makes the program significantly better and b) it’s easy to implement. Of course you’d make the easy improvements first. Of course.
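If you like seeing that logic written down, here’s a minimal sketch in Python. The candidates and their 1-to-5 impact and effort scores are entirely hypothetical; in practice, both come out of your analysis (and, let’s be honest, your gut).

    # A hypothetical shortlist of improvement candidates. "impact" and "effort"
    # are illustrative 1-to-5 scores an analyst might assign after an analysis.
    candidates = [
        {"name": "brand recall (86% -> 90%)", "impact": 4, "effort": 2},
        {"name": "landing page bounce rate", "impact": 3, "effort": 1},
        {"name": "checkout form completion", "impact": 5, "effort": 4},
    ]

    def priority(candidate):
        # Favor candidates that make the program significantly better (high
        # impact) and are easy to implement (low effort).
        return candidate["impact"] / candidate["effort"]

    # The one candidate worth obsessing over this cycle.
    obsession = max(candidates, key=priority)
    print(f"Recommend the hell out of: {obsession['name']}")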
Next time the program is run, the program manager has made one significant improvement and is in a position to identify a new set of improvement candidates. This assumes that the program is more or less achieving the goal(s) the program manager set for it. Otherwise, you’re looking for fixes, not improvements.
The reason this tends to work is that improvements don’t occur in vacuums. When you improve one aspect of a program, it’s possible that…things…happen to the program’s other aspects. Implementing a batch of apparently separate improvements at one time may not have the effect the program manager expected, and nobody will be able to say which change caused what.
Traditional
If you’re working at a company at which “traditional” is a bad word, then you’re probably celebrating these misunderstandings far more than lamenting them. Sometimes, however, “traditional” is a word worth using, because its meaning is relative, just like that of every other adjective in every language. Compared with programmatic ad buying, negotiated/direct buying is traditional, even if the sellers are mobile apps and the format is rich media. Nothing wrong with that.
If you’re having a discussion about programmatic buying, and the program manager is unhappy that there won’t be an account manager on the seller side working with them, then what they prefer is a more traditional means of ad buying. Nothing wrong with that. Traditional is known, and known comes with its own set of benefits–security, feasibility and speed are usually 3 of them.
So, “traditional” can mean “the geriatric way” when it’s misunderstood. In the worst of cases, that’s exactly what the speaker means. In a productive relationship, it’s just a way to say “the usual way.” Nothing more.
This one really isn’t a very big deal, though. It’s just as easy to use other words that don’t describe an idea’s age (“He’d rather negotiate ad placements” may even be more communicative than “He’d rather buy ads the traditional way”), so “traditional” is rarely a word that needs to be used at all.
Problem
Solving problems is arguably what anyone who’s good at their job does for a living. Gophers are problems for Groundskeepers. An imperfect record is a problem for a Football Coach. CO2 emissions, dependency on oil and a need for more speed are problems for Engineers (in a fantasy world without the auto industry, I mean). Their company not possessing all of the world’s disposable income is a problem for Marketers. That the entire company can’t immediately retire happily is a problem for Program Managers.
You get what I mean.
So, it’s a bit of a wonder to me when some colleagues flinch at the word “problem,” especially since so many people openly pride themselves on their “problem-solving skills” when they’re not in the middle of solving one. Without problems, there’s nothing to solve. Without something to solve, there’s no work to be done. Without work to be done, there’s no need for employment. Without employment…wait a minute…
Kidding aside, there’s nothing wrong with a program if it still presents problems to solve. That’s expected. At least that’s the case when you’re talking to an analyst. Only you can be sure what your boss means when they point out problems with your work.
Data (also: Insight)
At least as far as I’ve seen, doing business and doing science are two different things. When one sciences, they’re looking for knowledge–knowledge that gives reality just a little more definition. They (and especially their peers) hold their results to really high standards. There are 7 ways scientific results can be valid (correct) and 5 ways they can be reliable (reproducible). Even in social science, in which statistical analysis and interpretation tend to play stronger roles than experimentation, the researcher’s methods, strategy and analysis techniques will all be scrutinized before their results are allowed to leave the notebook. “Are you measuring what you think you’re measuring?” and “Are you measuring something general or esoteric?” are at the heart of much of the scrutiny. Data says something concrete to the researcher and their peer community, so there are such things as “good data” and “bad data.”
Business is different. Experimentation and data analysis do technically thrive in the most successful business programs. What doesn’t tend to matter nearly so much are the validity and reliability of those experiments and analyses. It doesn’t matter if we can’t be sure we’re measuring exactly what we think we’re measuring. It doesn’t matter if our program’s results can’t be generalized to describe the likely results of other programs. This is because, rather than seeking knowledge that helps us understand reality, we’re seeking insight that helps us make a decision. Data in this case are only as good as the decisions they inspire.
While a scientist cares about single measurements (10% of the population hating cookies really needs to mean that 10% of the population hate cookies), a business person cares only about trends and segments. It doesn’t matter that 0.75% of a website’s visitors clicked our banner ad this month. What matters is that 0.68% clicked last month and 0.50% the month before that. Or what matters is that our ad achieved 0.75% while another ad grabbed a full 1.0%. We don’t know how many of that 0.75% were bots, or page refreshes, or repeat visits, or visits from coworkers testing the ad at a coffee shop whose IP address our analytics software doesn’t filter. On its own, 0.75% is meaningless. When we say we’re measuring an audience’s response to our ad, we can’t be sure that’s actually what we’re measuring. And that’s not a big deal.
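To make that concrete, here’s a minimal sketch in Python using the made-up rates from this example. The point is the one above: the level of any single measurement is untrustworthy, but the direction across measurements still says something.

    # Monthly click-through rates for our banner ad, oldest first (the
    # illustrative numbers from above, in percent). Any single value is
    # polluted by bots, refreshes, repeat visits and unfiltered coworkers.
    monthly_ctr = [0.50, 0.68, 0.75]

    # The level is meaningless, but the noise pollutes every month in roughly
    # the same way, so the month-over-month direction still tells us something.
    deltas = [round(b - a, 2) for a, b in zip(monthly_ctr, monthly_ctr[1:])]
    trend = "upward" if all(d > 0 for d in deltas) else "unclear"
    print(f"month-over-month changes: {deltas} -> trend is {trend}")

    # Comparing segments works the same way: another ad's 1.0% against our
    # 0.75% means something even though neither number is exact.
    print(f"gap vs. the other ad: {1.0 - monthly_ctr[-1]:.2f} percentage points")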
Three months of traffic increases that add up to 50%, however, do matter. This is especially true if we were split testing ad elements each month. The incremental increases suggest that the changes were improvements, and give us hints about what tends to inspire action among our audience. Here’s the catch, though: which hints the data are sending depends upon the person doing the analysis. One analyst sees a trend toward “friendlier” headlines each month, and attributes the increasing effectiveness to friendly language. Another sees a pattern of increasing clarity: each headline was shorter than the last and spelled out more concrete benefits each time. Each will offer a different recommendation to their program manager at the end of an ad run: one will say “friendlier language” while the other will say “be more specific about benefits.” You won’t know which recommendation was more insightful unless you test them both.
And that’s when data are truly playing their role in business. Data aren’t the language of some behavioral code; they don’t tell you what to do. They whisper suggestions about what’s happening, and the human analyzing them has to turn those whispers into insight that fuels a recommendation. Data collection and analysis enable what people in suits call “data-driven decision making.” In terms of the benefit the approach affords a program manager, it enables “systematic decision making.” The program manager can make decisions with purpose: when one doesn’t work, they a) know it and b) can pretty easily figure out how to make a different, educated one.
It’s a little bit of a shame, then, when people hear “data-driven decision making” and interpret it as “automated decision making.” Some such misunderstandings inspire strong resistance to analytics programs. Others inspire crippling dependence on them.