K-Lab Update Summer 02 DC

La Tech IO PhD Summer 21 Graduates

It’s been an interesting and hot summer. First and foremost, the program graduated seven Ph.D.s!!

  • CAI, QIN
  • D’ILIO, TAYLOR ANNE
  • MCDONALD, DERRICK
  • PATEL, VINAY aka SWADEEP
  • REINECKE, OLIVIA
  • VOSBURG, MATTHEW
  • WALTER, MARLEY

Woo hoo! Congratulations Doctors!

They all now understand my mantra “It’s not over just because you successfully defended.” University red tape at its finest.

It was a heartwarming event, especially having the opportunity to hood three of my former RAs (Olivia, Vinay, and Derrick).

Olivia’s dissertation looked at a real-world example of organizational decision-making through the lens of statistical analysis using confidence intervals. It’s an interesting take on considering the risks associated with decisions based on organizational data. The primary message: statistical significance shouldn’t be the focus of many organizational decisions. Instead of relying on point estimates, business leaders should be looking at the range of plausible estimates (i.e., confidence intervals) related to desired outcomes (e.g., dollars) to determine the degree of risk that is acceptable. Of course, many researchers are making similar arguments with regard to scientific inquiry. However, Olivia found very little in the way of guidance for using such an approach in the literature. Most management textbooks mention only significance testing as a statistical tool. Secondary message: Stop categorizing continuous data!
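
To make the idea concrete, here is a minimal sketch (my own illustration, not taken from Olivia's dissertation) of reporting a confidence interval in outcome units (dollars) rather than a point estimate and a p-value. The data, sample size, and 95% level are all hypothetical assumptions.

```python
# Minimal sketch: report a dollar-valued confidence interval instead of a
# point estimate / significance test. All numbers below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical per-employee revenue gains ($) observed after an intervention.
gains = rng.normal(loc=1200, scale=4000, size=60)

mean = gains.mean()
sem = stats.sem(gains)  # standard error of the mean
# 95% t-based confidence interval for the mean gain (df passed positionally).
ci_low, ci_high = stats.t.interval(0.95, len(gains) - 1, loc=mean, scale=sem)

print(f"Point estimate: ${mean:,.0f} per employee")
print(f"95% CI: ${ci_low:,.0f} to ${ci_high:,.0f} per employee")
# A decision-maker can weigh the full range (including the downside end)
# against the cost of the intervention, rather than asking only whether
# the effect is "statistically significant."
```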

Both Derrick and Vinay did pioneering research in the assessment of cognitive abilities. As is the case with most pioneers, there were some major obstacles and tribulations. Derrick looked for evidence that practice on the KOTA would lead to less discrimination in a selection scenario, while Vinay examined the feasibility of using occlusion in a Shepard-Metzler test. Neither obtained statistical evidence to support their hypotheses. Looking at the data from both studies, there were a large number of rapid test-takers in their samples. Vinay used an online source (MTurk), and Derrick used a mixed snowball sample of participants recruited via social media and participants from an online source (MTurk). I hope to test my theory that online samples are not a good idea for assessing cognitive ability once COVID settles down. Until then, I’ll continue to slog onward with developing the OAIG project.
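
For readers curious what flagging rapid test-takers might look like, here is a minimal sketch (my own illustration, not either study's actual screening procedure) that flags respondents by per-item response time. The 2-second threshold, the 50% cutoff, and the data layout are hypothetical assumptions.

```python
# Minimal sketch: flag respondents with a high proportion of implausibly
# fast item responses. Thresholds and data are hypothetical.
import pandas as pd

# Hypothetical long-format data: one row per respondent x item.
responses = pd.DataFrame({
    "respondent": [1, 1, 1, 2, 2, 2],
    "item": ["i1", "i2", "i3", "i1", "i2", "i3"],
    "rt_seconds": [14.2, 9.8, 11.5, 1.1, 0.9, 1.4],
})

RAPID_RT = 2.0  # seconds; faster responses treated as non-effortful

flags = (
    responses.assign(rapid=responses["rt_seconds"] < RAPID_RT)
    .groupby("respondent")["rapid"]
    .mean()  # proportion of rapid responses per respondent
)
suspect = flags[flags > 0.5].index.tolist()
print("Respondents flagged for rapid responding:", suspect)
```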


Tilman Sheets
Professor of Psychology

My research interests include automated item generation and assessment of cognitive abilities.