Let's face it: in most industries, firms pretty much do the same thing


In the field of strategy, we always make a big thing out of differentiation: we tell firms that they have to do something different in the marketplace and offer customers a unique value proposition. Ideas around product differentiation, value innovation, and entire Blue Oceans are devoted to it. But we also can't deny that in many industries – if not most industries – firms more or less do the same thing.

Whether you take supermarkets, investment banks, airlines, or auditors, what you get as a customer is highly similar across firms.

1. Ability to execute: What may be the case is that, despite doing pretty much the same thing and following the same strategy, there can be substantial differences between firms in terms of their profitability. The reason can lie in execution: some firms have obtained capabilities that enable them to implement the strategy – and hence profit from it – better than others. For example, Sainsbury's supermarkets really aren't all that different from Tesco's, offering the same products at pretty much the same price, in pretty much the same shape and fashion, in nearly identical shops with similarly tempting routes and a till at the end. But for many years, Tesco had a superior ability to organise the logistics and processes behind its supermarkets, racking up substantially higher profits in the process.

2. Shake-out: As a consequence of such capability differences – although it can be a surprisingly slow process – and due to their homogeneous goods, we may see firms start to compete on price, margins decline to zero, and the least efficient firms get pushed out of the market. And one can hear a sigh of relief amongst economists: "our theory works" (not that we particularly care about the world of practice, let alone feel inclined to adapt our theory to it, but it is more comforting this way).

3. A surprisingly common anomaly? But it also can't be denied that there are industries in which firms offer pretty much the same thing, have highly similar capabilities, are no different in their execution, and still maintain ridiculously high margins for a sustained period of time. Why is that? For example, as a customer, when you hire one of the Big Four accounting firms (PwC, Ernst & Young, KPMG, Deloitte), you really get the same stuff. They are organised pretty much the same way, they have the same type of people and cultures, and they have highly similar processes in place. Yet they also (still) make buckets of money, repeatedly turning and churning their partners into millionaires.

"But such markets shouldn't exist!" we might cry out in despair. But they do. Even the Big Four themselves will admit – be it only in covert private conversations, carefully shielding their mouths with their hands – that they are really not that different. And quite a few industries are like that. Is it a conspiracy, illegal collusion, or a business X-file?

None of the above, I am sure – or perhaps a bit of all of them... For one, industry norms seem to play a big role in much of it: unwritten (sometimes even unconscious) collective moral codes, sometimes even spanning the globe, about how to behave and what to do if you want to be in this profession. Which includes the minimum price to charge for a surprisingly undifferentiated service.

A good fight clears the mind: On the value of staging a debate

I always enjoy witnessing a good debate. And I mean the type of debate where one person is given a thesis to defend, while the other person speaks in favour of the antithesis. Sometimes – when smart people really get into it – seeing two debaters line up the arguments and create the strongest possible defence can really clarify the pros and cons in my mind and hence make me understand the issue better.

For example – albeit in a written format – my good friend and colleague at the London Business School, Costas Markides, was recently asked by Business Week to debate the thesis that "happy workers will produce more and do their jobs better". Harvard's Teresa Amabile and Steven Kramer had the (relatively easy) task of defending the "pro" side. I say relatively easy because the thesis seems intuitively appealing, it is what we'd all like to believe, and they have actually done ample research on the topic.

My poor London Business School colleague was given the hapless task of defending the "con": "no, happy workers don't do any better". Hapless indeed.

In fact, in spite of receiving some hate mail in the process, I think he did a rather good job. I am giving him the assessment "good" because he did indeed make me think. He argues that having happy, smiley employees all around might not necessarily be a good sign, because it might signal that something is wrong in your organisation and that you're perhaps not making the tough but necessary choices.

As I said, it made me think, and that can't be bad. Might we not be dealing with a reversal of cause and effect here? Meaning: well-managed companies will get happy employees, but that does not mean that making your employees happy, as a goal in and of itself, will get you a better organisation. At least it is worth thinking about.

Although it might seem to you a natural thing to have in an academic institution – a good debate – it is actually not easy to organise one in business academia. Most people are simply reluctant to do it – as I found out organising our yearly Ghoshal Conference at the London Business School – and perhaps they are right, because even fewer people are any good at it.

I guess that is because, to a professor, it feels unnatural to adopt and defend just one side of the coin, because we are trained to be nuanced about stuff and to examine and see all sides of an argument. It is also true that (the more naïve part of) the audience will start to associate you with that side of the argument, "as if you really meant it". Many of the comments Costas received from the public were of that nature, i.e. "he is that moronic guy who thinks you should make your employees unhappy". Which of course is not what he meant at all. Nor was it the purpose of the debate.

Yet I also think it is difficult to find people willing to debate a business issue because academics are simply afraid to have an opinion. We are not only trained to examine and see all sides of an argument, we are also trained not to believe in something – let alone argue in favour of it – until there is research that has produced supporting evidence for it. In fact, if in an academic article you were ever to suggest the existence of a certain relationship without presenting evidence, you'd be in for a good bellowing and a firm rejection letter. And perhaps rightly so, because providing evidence, and thus real understanding, is what research is about.

But at some point, you also have to take a stand. As a paediatric neurologist once told me, "what I do is part art, part science". What he meant is that he knew all the research on all medications and treatments, but at the end of the day every patient is unique and he would have to make a judgement call on what exact treatment to prescribe. And doing that requires an opinion.

You don't hear much opinion coming from the ivory tower in business academia. Which means that the average business school professor does not receive much hate mail. It also means he doesn't have much of an audience outside of the ivory tower.

Research by Mucking About

I am a long-standing fan of the Ig Nobel awards. The Ig Nobel awards are an initiative by the magazine AIR (Annals of Improbable Research) and are handed out on a yearly basis – often by real Nobel Prize winners – to people whose research "makes people laugh and then think" (although its motto used to be to "honor people whose achievements cannot or should not be reproduced" – but I guess the organisers first had to experience the "then think" bit themselves).

With a few exceptions, they are handed out for real research, done by academics and published in scientific journals. Here are some of my all-time favourites:
• BIOLOGY 2002, Bubier, Pexton, Bowers, and Deeming. "Courtship behaviour of ostriches towards humans under farming conditions in Britain", British Poultry Science 39(4)
• INTERDISCIPLINARY RESEARCH 2002, Karl Kruszelnicki (University of Sydney), "for performing a comprehensive survey of human belly button lint – who gets it, when, what color, and how much"
• MATHEMATICS 2002, Sreekumar and Nirmalan (Kerala Agricultural University). "Estimation of the total surface area in Indian elephants", Veterinary Research Communications 14(1)
• TECHNOLOGY 2001, jointly to Keogh (Hawthorn), for patenting the wheel (in 2001), and the Australian Patent Office for granting him the patent
• PEACE 2000, the British Royal Navy, for ordering its sailors to stop using live cannon shells and to instead just shout "Bang!"
• LITERATURE 1998, Dr. Mara Sidoli (Washington) for the report "Farting as a defence against unspeakable dread", Journal of Analytical Psychology 41(2)

To the best of my knowledge, there is (only) one individual who has won not only an Ig Nobel Award but also a Nobel Prize. That person is Andre Geim. Geim – who is now at the University of Manchester – long held the habit of dedicating a fairly substantial proportion of his time to just mucking about in his lab, trying to do "cool stuff". In one such session, together with his doctoral student Konstantin Novoselov, he used a piece of ordinary sticky tape (which allegedly they found in a bin) to peel off a very thin layer of graphite taken from a pencil. They managed to make the layer of carbon one atom thick, inventing the material "graphene".

In another session, together with Michael Berry from the University of Bristol, he experimented with the force of magnetism. Using a magnetised metal slab and a current-carrying coil of wire as an electromagnet, they tried to create a magnetic force that exactly balanced gravity, to make various objects "float". Eventually they settled on a frog – which, like humans, mostly consists of water – and indeed managed to make it levitate.

The one project got Geim the Ig Nobel; the other one got him the Nobel Prize.

"Mucking about" was the foundation of these achievements. The vast majority of such experiments don't go anywhere; some of them lead to an Ig Nobel and make people laugh; others result in a Nobel Prize. Many of man's great discoveries – in technology, medicine, or art – have been achieved by mucking about. And many great companies were founded by mucking about: in a garage (Apple), a dorm room (Facebook), or a kitchen and a room above a bar (Xerox).

Unfortunately, in strategy research we don't muck about much. In fact, people are actively discouraged from doing so. During pretty much any doctoral consortium, junior faculty meeting, or annual faculty review, a young academic in the field of Strategic Management is told – with ample insistence – to focus, to figure out in what subfield he or she wants to be known, "who the five people are that are going to read your paper" (I heard this one in a doctoral consortium myself), and "who your letter writers are going to be for tenure" (I heard this one in countless meetings). The field of Strategy – or any other field within a business school, for that matter – has no time for and no tolerance of mucking about. Disdain and a weary shaking of the head are the fate of those who try, stepping off the proven path in an attempt to do something original with an uncertain outcome: "he is never going to make tenure, that's for sure".

And perhaps that is also why we don't have any Nobel Prizes.

"The Best Degree for Start-up Success"

"So you want to start a company. You've finished your undergraduate degree and you're peering into the haze of your future. Would it be better to continue on to an MBA or do an advanced degree in a nerdy pursuit like engineering or mathematics? Sure, tech skills are hugely in demand and there are a few high-profile nerd success stories, but how often do pencil-necked geeks really succeed in business? Aren't polished, suited and suave MBA-types more common at the top? Not according to a recent white paper from Identified, tellingly entitled 'Revenge of the Nerds'."

Interested? Yes, it does sound intriguing, doesn't it? It is the start of an article, written by a journalist, based on a report by a company called Identified. In the report, you can read that "Identified is the largest database of professional information on Facebook. Our database includes over 50 million Facebook users and over 1.2 billion data points on professionals' work history, education and demographic data".

In the report, based on the analysis of data obtained from Facebook, under the header "the best degree for start-up success", Identified presents some "definitive conclusions" about "whether an MBA is worth the investment and if it really gets you to the top of the corporate food chain". Let me no longer hold you in suspense (although I think by now you see this one coming from a mile or two away, like a Harry and Sally romance): the definitive conclusion is "that if you want to build a company, an advanced degree in a subject like engineering beats an MBA any day".

So I have read the report...

[insert deep sigh]

and – how shall I put it – I have a few doubts... (= polite English euphemism)

Although Identified has "assembled a world class team of 15 engineers and data scientists to analyse this vast database and identify interesting trends, patterns and correlations", I am not entirely sure that they are not jumping to a few unwarranted conclusions. (= polite English euphemism)

So, when they dig up from Facebook all the profiles of anyone listed as "CEO" or "founder", they find that considerably more of them are engineers than MBAs. (Actually, they don't even find that, but let me not get distracted here.) I have no quibbles with that; I am sure they do find what they find; after all, they do have "a world class team of 15 engineers and data scientists", and a fact is a fact. What I have more quibbles with is how you get from there to the conclusion that, if you want to build a company, an advanced degree in a subject like engineering beats an MBA any day.

Perhaps it may seem an obvious and legitimate conclusion to you: more CEOs have an engineering degree than an MBA, so surely getting an engineering degree is more likely to enable you to become a CEO? But no, that is where it goes wrong; you cannot draw this conclusion from those data. Perhaps "a world class team of 15 engineers and data scientists [able] to analyse this vast database and identify interesting trends, patterns and correlations" is superbly able at digging up the data for you but, apparently, less skilled at drawing justifiable conclusions. (I am tempted to suggest that, for this, they would have been better off hiring an MBA, but will fiercely resist that temptation!)

The problem is what we call "unobserved heterogeneity", coupled with some "selection bias", finished off with some "bollocks" (the last of which is not a generally accepted statistical term) – and in this case there is lots of it. For example – to start with a simple one – perhaps there are simply a lot more engineers trying to start a company than MBAs. If there are 20 engineers trying to start a company and 9 of them succeed, while there are 5 MBAs trying it and 3 of them succeed, can you really conclude that an engineering degree is better for start-up success than an MBA?

But, you may object, why would there be more engineers trying to start a business? Alright then, since you insist: suppose that out of 10 engineers 9 succeed and out of 10 MBAs only 3 do, but the nine head $100,000 businesses and the three head $100 million ones. Still so sure that an engineering degree is more useful to "get you to the top of the corporate food chain"? And what if the MBA companies have all been in existence for 15 years while none of the engineering start-ups ever makes it past year 2?
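To make the two objections concrete, here is a quick sketch using the purely hypothetical numbers from the examples above (nothing here comes from the Identified data):

```python
# Hypothetical figures from the text above, purely for illustration.

# Objection 1: base rates differ.
engineers_trying, engineers_succeeding = 20, 9
mbas_trying, mbas_succeeding = 5, 3

# More engineer-founded companies in absolute terms (9 vs. 3)...
print(engineers_succeeding, mbas_succeeding)
# ...yet the MBA success *rate* is higher.
print(engineers_succeeding / engineers_trying)  # 0.45
print(mbas_succeeding / mbas_trying)            # 0.6

# Objection 2: equal-looking counts can hide very different firm values.
engineer_value = 9 * 100_000       # nine $100,000 businesses
mba_value = 3 * 100_000_000        # three $100 million businesses
print(engineer_value < mba_value)  # True: fewer MBA firms, vastly more value
```

The headline fact "more engineer CEOs than MBA CEOs" is compatible with both of these scenarios, which point to opposite conclusions.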

And these are of course only very crude examples. There are likely more subtle processes going on as well. For instance, the same type of qualities that might make someone choose to do an engineering degree could also prompt him or her to start a company; yet this same person might have been better off (in terms of being able to make the start-up a success) had s/he done an MBA. And if you buy none of the above (because you are an engineer, or about to be engaged to one), what about the following: people who chose to do an engineering degree are inherently smarter and more able than MBAs, and hence they start more, and more successful, companies. However, that still leaves wide open the possibility that such a very smart and able person would have been even more successful had s/he chosen to do an MBA before venturing.

I could go on for a while (and frankly I will), but I realise that none of my aforementioned scenarios is likely to be exactly the right one; the point is that there may very well be a bit of several of them going on. You cannot compare the ventures started by engineers with the ventures headed by MBAs, you can't compare the two sets of people, you can't conclude that engineers are more successful at founding companies, and you certainly cannot conclude that getting an engineering degree makes you more likely to succeed in starting a business. So, what can you conclude from the finding that more CEOs/founders have a degree in engineering than an MBA? Well... precisely that: that more CEOs/founders have a degree in engineering than an MBA. And, I am sorry, not much else.

Real research (into complex questions such as "what degree is most likely to lead to start-up success?") is more complex. And so, likely, will be the answer. For some types of businesses an MBA might be better, and for others an engineering degree. And some types of people might be helped more by an MBA, where other types are better off with an engineering degree. There is nothing wrong with deriving some interesting statistics from a database, but you have to be modest and honest about the conclusions you attach to them. It may sound more interesting if you claim to have found a definitive conclusion about what degree leads to start-up success – and it certainly will be more eagerly repeated by journalists and in subsequent tweets (as happened in this case) – but I am afraid that does not make it so.

Fraud and the Road to Abilene

Over the weekend, an (anonymised) interview was published in a Dutch national newspaper with the three "whistle blowers" who exposed the enormous fraud of Professor Diederik Stapel. Stapel had gained star status in the field of social psychology but, simply speaking, had been making up all his data all along. Two things struck me:

First, in a previous post about the fraud – based on a flurry of newspaper articles and the interim report that a committee examining the fraud had put together – I wrote that it was eventually his clumsiness in faking the data that got him caught. That general picture certainly remains – he wasn't very good at faking data; I think I could easily have done a better job (although I have never even tried anything like that, honest!) – but it wasn't as clumsy as the newspapers sometimes made it out to be.

Specifically, I wrote: "eventually, he did not even bother anymore to really make up newly faked data. He used the same (fake) numbers for different experiments, gave those to his various PhD students to analyse, who then, slaving away in their adjacent cubicles, discovered in disbelief that their very different experiments led to exactly the same statistical values (a near impossibility). When they compared their databases, there was substantial overlap." It now seems the "substantial overlap" was merely part of one column of data. Plus, there were various other things that got him caught.

I don't beat myself too hard over the head with my keyboard for repeating this misrepresentation by the newspapers (although I have given myself a small slap on the wrist – after having received a verbal one from one of the whistlers), because my piece focused on the "why did he do it?" rather than the "how did he get caught?". But it does show that we have to give the three whistle blowers (quite) a bit more credit than I – and others – originally thought.

The second point that caught my attention is that, since the fraud was exposed, various people have come out admitting that they had "had suspicions all along". You could say "yeah, right", but there do appear to be quite a few signs that various people had indeed been having their doubts for a longer time. For instance, I have read an interview with a former colleague of Stapel's at Tilburg University credibly admitting to this, I have spoken directly to people who said there had been rumours for a while, and the article with the whistle blowers suggests even Stapel's faculty dean might not have been entirely dumbfounded that it had all been too good to be true after all... All the people who admit to having had doubts in private state that they did not feel comfortable raising the issue while everyone else just seemed to applaud Stapel and his Science publications.

This reminded me of the Abilene Paradox, first described by Professor Jerry Harvey of George Washington University. He described a leisure trip which he, his wife, and his parents made in Texas in July, in his parents' un-airconditioned old Buick, to a town called Abilene. It was a trip they had all agreed to – or at least not disagreed with – but, as it later turned out, none of them had wanted to go on. "Here we were, four reasonably sensible people who, of our own volition, had just taken a 106-mile trip across a godforsaken desert in a furnace-like temperature through a cloud-like dust storm to eat unpalatable food at a hole-in-the-wall cafeteria in Abilene, when none of us had really wanted to go."

The Abilene Paradox describes the situation where everyone goes along with something, mistakenly assuming that other people's silence implies that they agree. And the (erroneous) feeling of being the only one who disagrees makes each person shut up as well – all the way to Abilene.

People had suspicions about Stapel's "too good to be true" research record and findings but did not dare to speak up while no one else did.

It seems there are two things that eventually made the three whistle blowers speak up and expose Stapel: friendship and alcohol.

They had struck up a friendship, and one night, fuelled by alcohol, they raised their suspicions with one another. And, crucially, they decided to do something about it. Perhaps there are some lessons in this for the world of business. For example, Jim Westphal, who has done extensive, thorough research on boards of directors, showed that boards often suffer from the Abilene Paradox, for instance when confronted with their company's new strategy. Yet Jim and colleagues also showed that friendship ties within top management teams might not be such a bad thing. We are often suspicious of social ties between boards and top managers, fearful that they might cloud judgement and make a board reluctant to discipline a CEO. But such friendship ties – whether fuelled by alcohol or not – might also help lower the barriers to resolving the Abilene Paradox. So perhaps we should make friendship and alcohol mandatory – religion permitting – both during board meetings and academic gatherings. It would undoubtedly help make them more tolerable as well.

Bias (or why you can't trust any of the research you read)

Researchers in Management and Strategy worry a lot about bias – statistical bias. In case you're not such an academic researcher, let me briefly explain.

Suppose you want to find out how many members of a rugby club have their nipples pierced (to pick a random example). The problem is, the club has 200 members and you don't want to ask them all to take their shirts off. Therefore, you select a sample of 20 of them and ask them to bare their chests. After some friendly bantering they agree, and it appears that no fewer than 15 of them have their nipples pierced, so you conclude that the majority of players in the club have likely undergone this slightly painful (or so I am told) aesthetic enhancement.

The problem is, there is a chance that you're wrong. There is a chance that, due to sheer coincidence, you happened to select 15 pierced pairs of nipples even though among the full set of 200 members they are very much the minority. For example, if in reality only 30 of the 200 rugby blokes have their nipples pierced, you could by sheer chance happen to pick 15 of them in your sample of 20, and your conclusion that "the majority of players in this club have them" would be wrong.
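For the curious, the exact size of that "sheer chance" can be computed with the hypergeometric distribution. A minimal sketch, using the illustrative numbers above (200 members, of whom 30 are really pierced, and a sample of 20):

```python
from math import comb

N, K, n = 200, 30, 20  # club size, truly pierced members, sample size

def p_exactly(k):
    # Hypergeometric: P(exactly k pierced members in a sample of n,
    # drawn without replacement from N members of whom K are pierced).
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Probability of seeing 15 or more pierced chests purely by coincidence.
p_at_least_15 = sum(p_exactly(k) for k in range(15, n + 1))
print(p_at_least_15)  # astronomically small, far below 5%
```

With these particular numbers the fluke is extraordinarily unlikely; the 5% convention described next is about capping exactly this kind of probability.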

Now, in our research, there is no real way around this. Therefore, the convention among academic researchers is that it is OK to claim your conclusion based on only a sample of observations, as long as the probability that you are wrong is no bigger than 5%. If it isn't – and one can relatively easily compute that probability – we say the result is "statistically significant". Out of sheer joy, we then mark that number with a cheerful asterisk * and say amen.

Now, I just said that "one can relatively easily compute that probability", but that is not always entirely true. In fact, over the years statisticians have come up with increasingly complex procedures to correct for all sorts of potential statistical biases that can occur in research projects of various natures. They treat horrifying statistical conditions such as unobserved heterogeneity, selection bias, heteroscedasticity, and autocorrelation. Let me not try to explain what they are, but believe me, they're nasty. You don't want to be caught with one of those.

Fortunately, the life of the researcher is made easy by standard statistical software packages. They offer nice user-friendly menus where one can press buttons to solve problems. For example, if you have identified a heteroscedasticity problem in your data, there are various buttons you can press to cure it for you. Now, it is my personal estimate (but note: no claim of an asterisk!) that about 95 out of 100 researchers have no clue what happens inside their computers when they press one of those magical buttons, but that does not mean it does not solve the problem. Professional statisticians will frown and smirk at the mere thought, but if you have correctly identified the condition and the way to treat it, you don't necessarily have to fully understand how the cure works (although I think it would often help in selecting the correct treatment). So far, so good.

Here comes the trick: All of those statistical biases are pretty much irrelevant. They are irrelevant because they are all dwarfed by another bias (for which there is no life-saving cure available in any of the statistical packages): publication bias.

The problem is that if you have collected a whole bunch of data and you don't find anything – or at least nothing really interesting and new – no journal is going to publish it. For example, the prestigious journal Administrative Science Quarterly proclaims in its "Invitation to Contributors" that it seeks to publish "counterintuitive work that disconfirms prevailing assumptions". And perhaps rightly so; we're all interested in learning something new. So if you, as a researcher, don't find anything counterintuitive that disconfirms prevailing assumptions, you are usually not even going to bother writing it up. And in case you're dumb enough to write it up and send it to a journal requesting that they publish it, you will swiftly (or less swiftly, depending on which journal you sent it to) receive a reply with the word "reject" firmly embedded in it.

Yet, unintentionally, this publication reality completely messes up the "5% convention", i.e. that you can only claim a finding as real if there is no more than a 5% chance that what you found is sheer coincidence (rather than a counterintuitive insight that disconfirms prevailing assumptions). In fact, the chance that what you are reporting is bogus is much higher than the 5% you so cheerfully claimed with your poignant asterisk. Because journals will only publish novel, interesting findings – and therefore researchers only bother to write up seemingly intriguing counterintuitive findings – the chance that what eventually gets published is unwittingly BS is vast.
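A back-of-the-envelope sketch makes this concrete. Suppose (hypothetical numbers, not estimates of any real field) that researchers test 1,000 hypotheses, of which only 10% are actually true, that a true effect has a 50% chance of coming out significant, and that only significant results get written up and published:

```python
hypotheses = 1_000
true_share = 0.10   # fraction of tested hypotheses that are actually true
power = 0.50        # chance a real effect shows up as significant
alpha = 0.05        # the conventional 5% false-positive rate

true_effects = hypotheses * true_share                  # 100 real effects
true_positives = true_effects * power                   # 50 genuine findings
false_positives = (hypotheses - true_effects) * alpha   # 45 pure flukes

published = true_positives + false_positives            # 95 published results
print(false_positives / published)  # ~0.47
```

Every individual paper still carries its 5% asterisk, yet under these assumptions nearly half of the published "findings" are coincidences.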

A recent article by Simmons, Nelson, and Simonsohn in Psychological Science (cheerfully entitled "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant") summed it up painfully clearly. If a researcher running a particular experiment does not find the result he was expecting, he may initially think "that's because I did not collect enough data" and collect some more. He may also think "I used the wrong measure; let me use the other measure I also collected", or "I need to correct my models for whether the respondent was male or female", or "let me examine a slightly different set of conditions". Yet taking these (extremely common) measures raises the probability that what the researcher finds in his data is due to sheer chance from the conventional 5% to a whopping 60.7%, without the researcher realising it. He will still cheerfully put the all-important asterisk in his table and declare that he has found a counterintuitive insight that disconfirms some important prevailing assumption.
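The mechanism is easy to simulate. The sketch below is not Simmons et al.'s exact setup, just an illustration under simple assumptions: all the data are pure noise, but the researcher tries two outcome measures and, if neither "works", collects more data and tests again.

```python
import math
import random

random.seed(42)

def p_value(sample):
    # Two-sided z-test of "mean = 0" for data with known variance 1.
    z = sum(sample) / math.sqrt(len(sample))
    return math.erfc(abs(z) / math.sqrt(2))

def significant(a, b):
    # "Significant" if either outcome measure passes the 5% threshold.
    return p_value(a) < 0.05 or p_value(b) < 0.05

def flexible_study():
    # Two outcome measures, both pure noise: no real effect exists.
    a = [random.gauss(0, 1) for _ in range(20)]
    b = [random.gauss(0, 1) for _ in range(20)]
    if significant(a, b):
        return True
    # Not significant? Collect 20 more observations and try again.
    a += [random.gauss(0, 1) for _ in range(20)]
    b += [random.gauss(0, 1) for _ in range(20)]
    return significant(a, b)

trials = 10_000
rate = sum(flexible_study() for _ in range(trials)) / trials
print(rate)  # well above the nominal 5%, with just these two tricks
```

Add a few more degrees of freedom (extra conditions, optional covariates, dropping "outliers") and the rate climbs toward the 60.7% the paper reports.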

In management and strategy research we do highly similar things. We, for instance, collect data with two or three ideas in mind of what we want to examine and test with them. If the first idea does not lead to the desired result, the researcher moves on to his second idea, and then one can hear a sigh of relief from behind a computer screen: "at least this idea was a good one". In fact, you might just be moving on to "the next good idea" until you have hit on a purely coincidental result: 15 bulky guys with pierced nipples.

Things get really "funny" when one realises that what is considered interesting and publishable differs across fields in Business Studies. For example, in fields like Finance and Economics, academics are likely to be fairly sceptical about whether Corporate Social Responsibility is good for a firm's financial performance. In the subfield of Management, people are much more receptive to the idea that Corporate Social Responsibility should also benefit a firm in terms of its profitability. Indeed, as shown by a simple yet nifty study by Marc Orlitzky, recently published in Business Ethics Quarterly, articles published on this topic in Management journals report a statistical relationship between the two variables that is about twice as big as the ones reported in Economics, Finance, or Accounting journals. Of course, who does the research and where it gets printed should not have any bearing on what the actual relationship is, but, apparently, preferences and publication bias do come into the picture with quite some force.

Hence, publication bias vastly dominates any of the statistical biases we get so worked up about, making them pretty much irrelevant. Is this a sad state of affairs? Ehm... I think yes. Is there an easy solution for it? Ehm... I think no. And that is why we will likely all be suffering from publication bias for quite some time to come.

Do you have "Text Neck"?

I think I do. Not from texting, as I don't do much of that, but from constantly using the computer, writing, and punching a calculator. After reading about Text Neck over at Instapundit, I diagnosed myself and decided this morning to spend a few minutes doing neck stretches and strengthening exercises from a great book I use called 8 Steps to a Pain-Free Back: Natural Posture Solutions for Pain in the Back, Neck, Shoulder, Hip, Knee, and Foot. If you have problems in any of these areas, try this book; it has helped me a lot.

Anyway, according to an article on this disorder over at the Daily Mail:
'Our muscles are designed to flex and retract,' he told Mail Online.
'If you stay in a fixed posture for too long like peering over a phone you are putting those muscles under stress.'
Mr Hutchful said leaning the head forwards was like holding a 10 to 12lb weight away from the body.
'Muscles will go into spasm if they have to hold such a position,' he said.
He added that tall young women with slender necks were anatomically most at risk from neck problems, as were sedentary people not used to using different muscles.

The article has several tips from a chiropractor to combat text neck such as taking frequent breaks and rotating your shoulders forward to increase blood flow.

Do you have text neck? How do you combat it?

Why was Sharon Bialek Fired?

After reading the news and watching the videos of Sharon Bialek, the Herman Cain accuser, I noticed that many of the articles mentioned that Bialek was fired or let go "from the NRA's educational foundation." However, none of them mentioned why. Being let go or fired is more serious than simply being laid off, and the reason might give more insight into her character and whether she is a trustworthy person. Why was she fired? Does anyone know, or has anyone seen a report on what happened with her job in the summer of 1997? Here is more from the New York Times:
Ms. Bialek said she first met Mr. Cain during her time at the association’s Chicago office, when he sat next to her at a dinner during one of the group’s conventions. He later invited her and her boyfriend to an after-party in his hotel suite, she said.

But the alleged harassment did not occur until after she was fired, she said. Sue Hensley, a spokeswoman for the restaurant association, confirmed that “Sharon Bialek was employed by the National Restaurant Association Educational Foundation from 12/30/96 – 6/20/97.”


She didn't work there long; I wonder what happened.