My Bachelor's was a dual-study program in Business Administration and Advertising. Along with Engineering and Journalism, those are probably some of the most practical academic programs on the planet. That is, they teach you things like these:
1. A fast 80% is often better than a slow 100%
2. Data analysis only needs to inform a decision, not yield The Truth
3. The best idea is the one you can get people behind
Arguably, those insights are really good fuel for success in business. Those programs were exactly what I’d hoped they would be.
After my Bachelor's, I went to work in Marketing for a large tech company. Those three insights above, and a host of others, resurfaced again and again on the job. My teachers seem to have known what they were talking about.
My Master’s program is more scientific (put your torches down, natural scientists–I just said “more scientific”). This time around, we’re learning how to beat a dead data horse into the ground in order to conclude exactly how right we can claim we are at the end of a sociological study. You’re only 80% sure that your study didn’t yield a false-positive result? Get back to your political polling, noob. It’s exactly what I’d hoped it would be.
I can stop there, for now. The point of this post is obvious–a Master's program taught by scientists is a lot different from a Bachelor's program taught by industry veterans. Here's how:
1. “How right?” vs. “Not obviously wrong”
In the sciences, your audiences will usually be wondering how right you are–how comfortably they can consider your results an insightful description of something real. The quantitative side of this is obvious–which test did you run, and how significant are its results? There’s a number there. That’s pretty much the end of the story.
There’s a qualitative side, too. “Which model did you assume?” is one qualitative question. If you assume a linear relationship between a set of explanatory variables and an outcome variable, but the scientific community really thinks the relationship should be exponential, then your study’s a bust. It won’t get published, and your results will never enter conventional wisdom. Theoretically, the correct answer to “how right?” is “100%.” However, I don’t think any scientist will expect that any results are 100% correct, even if the data suggest that they are. Instead, you want to be 90-100% correct. Unless you’re into theory generation, in which case you’re a salesperson.
In school, this means that a successful project results in your thorough understanding of what's already known. For instance, I'm taking a class called "Value Chain Economics." It's microeconomic theory applied to optimizing an industrial chain from start to finish–from raw-material sourcing to finished-goods shipping. Our capstone project is exactly what you'd expect–analyze a value chain, identify its problems, and recommend a solution to one of them. In a business or ad program, we would be expected to gather as much insight as possible and then craft an original solution. Inductive logic usually feeds original solutions, so we'd be applauded for reasonable leaps in logic. In this program, however, one of the questions our graders are going to ask is "how can you be sure this will work?" (contrast that with a business professor's "why do you think this will work?"). This makes looking for case studies that document a solution to the problem a viable strategy.
“How right” rarely seems to matter in business, though. “Not obviously wrong” is what matters. You come up with a theory, get your colleagues behind it, run a program, and then determine and document how well the program performed. That’s where good business knowledge comes from. Plan, execute, evaluate, optimize–that’s the stuff, and it involves a lot of celebrated failure. It works, because it’s fast and, online at least, cheap. If your aim is making people aware that your product can fly, and your marketing plan involves a bunch of networking events and keynote presentations, then your plan might fail the “not obviously wrong” test.
2. A Master’s Student is Not a Student–They’re an Intern
I don't know if it's because we're past learning the basics, or because scientist-professors are selfish, but I strongly suspect that most of our projects are meant to help our professors do their jobs as researchers. If they teach us something as well, then hooray for two birds.
In “Conflict Resolution,” we’re supposed to identify a natural resource conflict, analyze it and propose a strategy for alleviating it. Sounds great, right? I still think it does. However, our professor is paying WAY more attention to those of us who chose mining-related conflicts than those who didn’t. And wouldn’t you know it, that’s the subject of her research. What a coincidence! Those poor “whaling conflict” groups.
This may be because the scientist-professor's career is still active and tied to the university. In my business and ad programs, the professors had to be enticed away from industry jobs to finish out their working days teaching the next batch of young thems. In their case, teaching was their core function, and presumably that's why our projects back then were extremely educational but had very little use outside of our final classroom presentations (unless the subject companies were looking for ideas).
3. 80%? Are You Kidding Me?
Remember the "Fast 80" insight with which I opened this post? After my Bachelor's and those three years at the tech company, that insight informs my worldview to a large degree. It makes an awful lot of sense–you'd be surprised how often you've done what you need to do by the time you hit the 80%-of-perfection point. In any case, I think its real value is in reminding us that it's usually time to wrap up a project before we consider it truly perfect.
This difference is funny, because it affects the way we classmates interact with one another.
Our program probably contains equal numbers of what we’ll call “academics” and “industrials.” Industrials like me plan to continue in the business world after graduating. Academics want to keep doing research and get published. 80% is simply not enough for an academic, and trying to pitch them on the idea will drive them crazy.
Here's an example of an industrial-academic conflict: some research says that plastic takes an average of 450 years to degrade naturally when thrown away. Other research says it takes 1,000 years. The industrial thinks "it takes a long time," and just includes 450 years in their presentation to err on the conservative side. "It takes a long time" is really all the audience needs to know. The academic, on the other hand, needs to know which figure is correct, or how it's possible that both are. On a team, as soon as the industrial settles on 450, the academic has a panic attack as visions of a declining final grade flash before their eyes, even if the envisioned decline is from a 96% to a 93%.
I'm finding that this dynamic fosters extremely effective teamwork, however. In business school, everyone's obviously an industrial. On the worst of teams, that leads to a lot of fudging–"eh, it's close enough" adds up enough times to drive the final result so far from insightful that the project ends up a lazy, useless buzzword salad (startup elevator pitches, anyone?).
Presumably, then, the worst of the all-academic teams will have the opposite problem. They will have real, specific knowledge to share. They'll have tons of it, in fact–so much that they have a hard time justifying cutting any of it out of their presentation. Collecting that knowledge took a lot of time and hard work, man! On presentation day, they give 250% of a presentation to a sleeping audience who already heard the industrials' presentation a month ago.
You see where I’m going with this: the mixed industrial/academic team is really beneficial to both types.
So, there's a taste of the differences I've experienced transitioning from a practical study program to a theoretical one, from business to science, from Bachelor's to Master's. Do you have similar or different experiences? Light up that comment section.