When I was a grad student many years ago, I subsidized my studies by doing part-time statistical and programming consulting at the university. A research group would typically hand me data on magnetic tape to read, organize and analyze. At that time, my “platform” was Fortran, PL/I and SAS on an IBM mainframe.

On two occasions I worked with students writing their Ph.D. dissertations in clinical psychology. I remember feeling sorry for their plight of having to do “empirical” studies that included statistical analysis when their interests and training couldn't have been further removed from it. The students were given “puzzles” to solve by their advisers and then tasked with the statistical design, programming and analysis of the data. The shocker was not that the unwitting students didn't know that stuff, but that their advisers didn't know it either. When one frustrated student's ANOVA came back empty, her adviser “ordered” a regression analysis, apparently unaware that both derive from the same linear model and hence would yield identical results. Both harried students paid me well to “fish” for the statistical significance I was almost certain not to find. As a result of these hapless exercises, I developed a healthy skepticism toward much of what I read in behavioral research.
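For readers curious why the adviser's request was futile: a one-way ANOVA is just a linear model with a dummy-coded categorical predictor, so "switching" to regression refits the identical model. Here is a minimal sketch in Python using statsmodels with simulated data (the group names and sample sizes are hypothetical, chosen only for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Simulated data: one categorical factor with three groups (hypothetical example)
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "group": np.repeat(["A", "B", "C"], 20),
    "score": rng.normal(loc=50, scale=10, size=60),
})

# "ANOVA": fit the linear model with a dummy-coded factor, then summarize it
# as an ANOVA table
model = ols("score ~ C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# "Regression": the overall F-test of the very same fitted model -- the
# F-statistic and p-value match the ANOVA table above
print(model.fvalue, model.f_pvalue)
```

Running it shows the ANOVA F for the group factor and the regression's overall F are one and the same number, which is the point the adviser missed.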
