5
Dec

Teaching statistics tip: Know your students

Almost always when I get asked to teach anything my answer is:

No. 

I don’t even think about it. Just, no. I’m too busy. Usually, I’ll teach one graduate class a year and that’s it. However, recently I had the opportunity to teach an introduction to statistics course and design it from the ground up, which sounded like my idea of fun. The college is predominantly an arts school, with students majoring in screenwriting, dance and drama, plus a smattering of entertainment business majors.

Normally, when I teach graduate statistics courses I use SAS, require students to learn at least a minimal amount of programming, and expect them to be able to do things like partition the sums of squares.

Julia in Trinidad

The Spoiled One NOT computing the area under the curve

It just so happens that The Spoiled One, who is a Creative Writing major (what does she want to be when she graduates? Unemployed, apparently) took statistics last year, which resulted in many 11 pm (2 am Eastern time where she attends school) phone calls to me on things like how to compute the area under the curve between two z-scores.

Despite my best efforts, I believe she left the class with zero conviction that she would ever use statistics, and I really don’t blame her. There is not a lot of call in one’s daily life for looking up values in a z table, it being the 21st century and all and us having computers.
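As a small illustration of that point, here is a minimal sketch of how little it takes to get the area between two z-scores with SAS, the package used elsewhere on this blog (the z values here are made up for illustration):

* Area under the standard normal curve between two z-scores ;
* the z values are made up for illustration ;
data area ;
z_low = -1.00 ;
z_high = 1.96 ;
area_between = probnorm(z_high) - probnorm(z_low) ; * CDF at the upper z minus CDF at the lower z ;
put area_between= ;
run ;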

Here is my honest appraisal of my soon-to-be students – nearly 100% of them will be able to use skills such as creating graphs with Excel, computing averages and understanding the difference between the median and the mean and when each measure is appropriate. I can tell them truthfully how they could use this information in deciding which contract to accept, in which film to invest and whether one dance studio is preferable to another in terms of business viability. There is less than a 10% chance that, as juniors and seniors in an arts college, they are going to change their minds and decide they want to go into a research career. If they do make that choice, everything they learn in this course will apply. What I did not do was include a lot of proofs, matrix algebra or hand computation.

I gave some thought to using JMP because of the graphics, and to SAS Studio, because it is available free and we could use the tasks menu, which is pretty cool. The fact is, though, these students are most likely familiar with Excel and the campus already has a license. It’s installed on every computer in the lab. Installing the Analysis ToolPak is super-easy, whether you are using Office 365 or the regular Office (I hear some people calling that the productivity suite).

So, if I am not having students use SAS or calculate the area under a curve, what am I doing?

One thing I am requiring is that every student create their own livebinder. You’re welcome to take a look at the assignment in the livebinder I’m preparing for my own use for the course – just look under the livebinder assignment tab.

I have a lot more to write about this later. Right now, I have guests on the way, so I’ll try to post more tomorrow.


Want to learn statistics in a game?  Play Aztech: The Story Begins – free for your iPad in the app store, in Spanish and English.  The first in our series of bilingual games teaching math and history.

20
Nov

The statistical knowledge you need the most – almost everywhere

deer in back yard

What do a herd of deer and a sea lion have to do with statistics?

Friday, I was on the Spirit Lake Dakota Nation in North Dakota. Most of my time there was spent at the Spirit Lake Vocational Rehabilitation Project, an impressively effective group of people who help tribal members with disabilities get and keep jobs. A few years back, I wrote a system to track their data using PHP and MySQL. It is deliberately simple because they wanted a basic database that would give reports on the number of people served, how many had jobs, and some demographic information. A research project used SAS to analyze the data to try to identify predictors of employment.

Due to a delayed flight, I spent the night with my friend in Minot, discussing, among other things, the decline in native speakers of Cree, and not the herd of deer in her backyard, which was commonplace enough to pass without comment.

harbor at night

Saturday, I was back home in California, on a dinner cruise in Marina del Rey. We were discussing how to analyze the data on persistence in our games to show that the re-design, with a longer lead-in story line and a higher proportion of game play early on, was effective. I suggested maybe we could use survival analysis. Really, it’s the same scenario as how many people are alive after 2, 3 or 4 months, or how many people kept playing the game after the 2nd, 3rd or 4th problem.

sea lion on dock
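A minimal sketch of that survival analysis idea in SAS might look like the following. The data set and variable names here are hypothetical: PERSISTENCE has one record per player, PROBLEMS_COMPLETED is the last problem reached, STILL_PLAYING flags players who had not quit when the data were pulled (censored), and VERSION identifies the old versus re-designed game.

proc lifetest data=persistence plots=survival ;
time problems_completed * still_playing(1) ; * still_playing = 1 means the player is censored ;
strata version ; * compare the persistence curves for the two game versions ;
run ;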

The deer, the large loud sea lion on the dock and I spent the exact same amount of time discussing the probability mass function for a Poisson distribution and proving the Central Limit Theorem.

My point is that everywhere I go – and that is a REALLY broad range of places – people are interested in the application of statistics, but SO much of school is focused on teaching how to compute the area under the normal curve, how to prove some theorem, or how to compute coefficients by plugging numbers into a formula with a calculator and inverting matrices. I’m not sure how helpful that ever was to a student, and I can guarantee you that the last time I computed sums of squares without using a computer was about 35 years ago.

Whether you are using SAS, SPSS, Excel, R, JMP or any one of a dozen other statistical packages, the software lets you focus on what’s really important. Does age actually predict whether or not someone is employed (in this case, no)? Do rural school districts have fewer bureaucratic barriers? Is this a reliable test? Did students who played these games improve their math scores?

When I was young, and many of the current statistical packages were either very new and limited or didn’t yet exist, someone asked me if I was worried that I would be out of a job. I laughed and said no, because what computers were replacing was the computational part of statistics, and except for that tiny proportion of people who were going to be developing new statistics, the jobs were all going to be in applying formulas, not proving them and sure as hell not computing them with a pencil and a piece of paper. A computer allows you to focus on what’s important.

What IS important? That’s a good question and another post.


Having trouble teaching basic statistics to students? Start with Aztech: The Story Begins  — free from the app store (and it’s bilingual)

girl in jungle

5
Nov

Forget your dream, kid. Follow statistics

basketball

I saw this poster in a high school, supposedly said by a basketball coach:

People say, “Follow your dreams.” I say, “Forget your dreams, kid, follow math.”

He goes on to give the percentage of high school athletes who compete in college – 3.4% for men’s basketball, and only about 1% of high school players make it to Division I. Even if you make it to the college level, your odds of becoming a professional athlete are dismal: 1.1% of college basketball players make it to the major professional teams. That is roughly 1% of 1%, so you have about a .01% chance of making it onto the Lakers even if you are playing basketball in high school.

If you are that 1 in 10,000 who makes it on the roster, your median salary will be $3.7 million and you will play for around 4.8 years, giving you a career salary of around $18.5 million.

Let’s say you are a statistician with a Ph.D. With 5-9 years of experience, your median salary is around $130,000. In my experience, it is going to be considerably less your first year but go up fairly rapidly. Let’s say you have the sense to get some scholarship and grant funds to pay for your tuition – my total student loan debt was $900 – and that you graduate in your 30s – I was 31, and that was with taking a few years out to work as an engineer. There isn’t any particular reason you have to retire before 65 or 70. It’s not like your knees go out and they fire you from your statistician job. I’m going to give a ballpark figure of $150,000 a year, on average, over those 36 years, which turns out to be about the median salary for a statistician who doesn’t work in academia, according to the American Statistical Association. You’re at $5.4 million. That’s not counting 36 years of health insurance, a 401(k) and other benefits, like not having a boss who is referred to as your “owner”, which I personally find kind of creepy. You also have to consider that you don’t get the $5.4 million all at once, either.

So, let’s present this to you:

  • You have a 1 in 10,000 chance of making $18.5 million
  • You have a 55 out of 100 chance of making $5.4 million.

You can only buy one ticket. Which lottery ticket do you buy?

Oh, by the way, did I mention you have a 90 out of 100 chance of making over $3 million?
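If you want to make the comparison concrete, here is a rough back-of-the-envelope sketch in SAS using the ballpark probabilities and totals from this post:

data expected_earnings ;
* expected career earnings, in millions of dollars ;
nba_expected = (1/10000) * 18.5 ; * a 1 in 10,000 chance of an $18.5 million career ;
stats_expected = 0.55 * 5.4 ; * a 55 in 100 chance of a $5.4 million career ;
put nba_expected= stats_expected= ;
run ;

Using those rough numbers, the statistician’s ticket has an expected value of roughly $3 million; the NBA ticket, about $1,850.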

The coach’s point was that you may be dreaming about a spot in the NBA but you have a much greater chance of success in life if you spend your time in the math class instead of on the court. As a good friend of mine often says, “Too many people confuse wishes with plans.”

So, you may dream of slam dunks in the NBA but you would be a lot better off planning to take Calculus, several statistics courses and study a field like business, psychology, political science or epidemiology where you can apply those statistics.

You might think I don’t have any heart, that I have no idea what it means to dream of being a successful athlete. Actually, you’d be wrong. I ran track in college. I won the world championships in judo. Then, the next year, I went into a Ph.D. program and specialized in statistics because, well, I’m good enough at math to see what had the better probability of paying off in the future.

There are SO many ways to learn and use statistics. That’s another post, though. I’d best toddle off to bed since I need to catch a plane tomorrow after I go do a charity walk in the morning.

Early morning and snow, two things I hate the most. Well, life can’t be perfect all the time. I think I can prove that statistically.

Get started learning statistics with the Aztech Games series for iPad. The first game is available now and it’s free!

pyramid

 

14
Sep

It only seems like this has nothing to do with statistics

Last post, I talked about bricolage, the fine art of throwing random stuff together to make something useful. This is something of a philosophy of life for me.

Seems rambling but it’s not …

Over 30 years ago, I was the first American to win the world judo championships. A few years ago, I co-authored a book on judo, called Winning on the Ground. 

Winning on the ground cover

When it came to judo, although I was better than the average person, I was not the best at the fancy throws – not by a long shot. I didn’t invent any new judo techniques.  I wanted to call our book The Lego Theory of Judo but my co-author said, “That’s stupid” and the editor, more tactfully, said, “Nobody will know what you are talking about unless they read the book and you want a title that will get them to buy the book”. So, I lost that argument.

What I was really good at was putting techniques together. I could go from a throw to a pin to an armbar and voila – world champion! Well, it took a long time and a lot of work, too.

How does this apply to statistics?

Let’s start with Fisher’s exact test. Last year, I wrote about using this test to compare the bureaucratic barriers to new educational technology in rural versus urban school districts. Just in case you have not memorized my blog posts, Fisher’s exact test can be used when you have a 2 x 2 table that fails to meet the chi-square requirement of a minimum expected frequency of five per cell. In that instance, with only 17 districts, chi-square would not be appropriate. If you have a 2 x 2 table, SAS automatically computes the Fisher exact test, as well as several others. Here is the code:

PROC FREQ DATA = install ;
TABLES rural*install / CHISQ ;
RUN ;

Ten years ago, I used this same test in a very different context, as a statistical consultant working with a group of physicians who wanted to compare mortality rates between a department whose staff had a specific training program and a similar department where the physicians were equally qualified except for participation in the specialized program. Fortunately for the patients, but unfortunately for statistical analysis purposes, people didn’t die that often in either department. Exact same problem. Exact same code, except for changing the variable names and data set name.
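With hypothetical names for that mortality example – say, a data set MORTALITY with a variable TRAINED for the department with the specialized program and DIED for patient mortality – the code would look like this:

PROC FREQ DATA = mortality ;
TABLES trained*died / CHISQ ; * Fisher exact test is produced automatically for a 2 x 2 table ;
RUN ;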

In 35 years, I have gone from using SAS to predict which missiles will fail at launch, to which families will place their child with a disability in a residential facility, to which patient in a hospital will die, to which person in a vocational program will get employed, to which student will quit playing an educational game. ALL of these applications have used statistics, and in some cases, like the examples above, the identical statistics applied in very diverse fields.

Where do the Legos come in?

In pretty much every field, you need four building blocks: statistics, foundational programming concepts, an understanding of data management and subject-specific knowledge. SAS can help you with three of these, and if you acquire the fourth, you can build just about anything.

More on those building blocks next post.

Support my day job AND get smarter. Buy Fish Lake for Mac or Windows. Brush up on math skills and canoe the rapids.

girl in canoe

For random advice from me and my lovely children, subscribe to our youtube channel 7GenGames TV

 

9
Jul

Why I still teach

Once every year, I teach an actual course – not a workshop or professional development, but a class with 20-40 students, one where I need to write a syllabus, give lectures, and grade papers, homework and exams.

Now, I’m not comparing teaching master’s or doctoral students 3-6 hours a week to my friends who teach middle school six hours a day. In fact, when I go for a day or two as a guest speaker for six classes a day, and I need to stand on my feet and keep 40 teenagers’ attention for all of that time, I think yet again that teachers don’t get paid nearly enough.

There are several reasons that it is important to me to teach a course every year, and one is that I think it is super-important as someone who makes educational technology that I be in an actual classroom with students. It’s easy to forget how unbelievably BUSY teachers are if you are not in that situation day after day.

It’s also easy to overestimate the amount of time teachers have to investigate new technology. For example, for the course I am just finishing, I considered just two options for statistical software – SAS and SPSS. The university had a license for one, and the other was available free (through SAS Studio). I knew R existed, of course, but I did not consider it as an option for these students (long story I will skip). I had a short time to decide, and someone suggested another option – JMP – that I had not considered, but by then I didn’t have time to research it, find a possible textbook and integrate it into my syllabus and lectures. If I’d had more time to look into it, that might have been a good choice.

I know there are other options out there – I had looked at Statistica at one point and it looked pretty cool. However, now that I have my syllabus done, lectures written, textbooks selected and model assignments prepared, and my students are generally doing pretty well, it is hard to see myself spending a lot of time researching new software applications for my engineering students. (Social science and education might be a different issue.)

My point is that one evident challenge for anyone who makes educational technology is the “good enough” problem. That is, if things are going well enough, teachers are not highly motivated to look for something better.

One of the things that drives me crazy is teachers who think it’s “good enough” when the vast majority of their students are below grade level or not proficient – but that’s a rant for another day.

(If you’re fascinated by this topic – and who wouldn’t be – I wrote more about why teaching helps me run an ed tech startup on my other blog over on the 7 Generation Games site)

 

When I am not writing about statistics, I’m making games that teach math, social studies and language.

Check them out.

screen shots from our games

19
Jun

Data Science Tool Market Share Leading Indicator: Scholarly Articles

Below is the latest update to The Popularity of Data Science Software. It contains an analysis of the tools used in the most recent complete year of scholarly articles. The section is also integrated into the main paper itself.

New software covered includes: Amazon Machine Learning, Apache Mahout, Apache MXNet, Caffe, Dataiku, DataRobot, Domino Data Labs, IBM Watson, Pentaho, and Google’s TensorFlow.

Software dropped includes: Infocentricity (acquired by FICO), SAP KXEN (tiny usage), Tableau, and Tibco. The latter two didn’t fit in with the others due to their limited selection of advanced analytic methods.

Scholarly Articles

Scholarly articles provide a rich source of information about data science tools. Their creation requires significant amounts of effort, much more than is required to respond to a survey of tool usage. The more popular a software package is, the more likely it will appear in scholarly publications as an analysis tool, or even an object of study.

Since graduate students do the great majority of analysis in such articles, the software used can be a leading indicator of where things are headed. Google Scholar offers a way to measure such activity. However, no search of this magnitude is perfect; each will include some irrelevant articles and reject some relevant ones. Searching through concise job requirements (see previous section) is easier than searching through scholarly articles; however only software that has advanced analytical capabilities can be studied using this approach. The details of the search terms I used are complex enough to move to a companion article, How to Search For Data Science Articles.  Since Google regularly improves its search algorithm, each year I re-collect the data for the previous years.

Figure 2a shows the number of articles found for the more popular software packages (those with at least 750 articles) in the most recent complete year, 2016. To allow ample time for publication, insertion into online databases, and indexing, the data was collected on 6/8/2017.

SPSS is by far the most dominant package, as it has been for over 15 years. This may be due to its balance between power and ease-of-use. R is in second place with around half as many articles. SAS is in third place, still maintaining a substantial lead over Stata, MATLAB, and GraphPad Prism, which are nearly tied. This is the first year that I’ve tracked Prism, a package that emphasizes graphics but also includes statistical analysis capabilities. It is particularly popular in the medical research community where it is appreciated for its ease of use. However, it offers far fewer analytic methods than the other software at this level of popularity.

Note that the general-purpose languages: C, C++, C#, FORTRAN, MATLAB, Java, and Python are included only when found in combination with data science terms, so view those counts as more of an approximation than the rest.

Figure 2a. Number of scholarly articles found in the most recent complete year (2016) for the more popular data science software. To be included, software must be used in at least 750 scholarly articles.

The next group of packages goes from Apache Hadoop through Python, Statistica, Java, and Minitab, slowly declining as they go.

Both Systat and JMP are packages that have been on the market for many years, but which have never made it into the “big leagues.”

From C through KNIME, the counts appear to be near zero, but keep in mind that each is used in at least 750 journal articles. However, compared to the 86,500 that used SPSS, they’re a drop in the bucket.

Toward the bottom of Fig. 2a are two similar packages, the open source Caffe and Google’s Tensorflow. These two focus on “deep learning” algorithms, an area that is fairly new (at least the term is) and growing rapidly.

The last two packages in Fig 2a are RapidMiner and KNIME. It has been quite interesting to watch the competition between them unfold for the past several years. They are both workflow-driven tools with very similar capabilities. The IT advisory firms Gartner and Forrester rate them as tools able to hold their own against the commercial titans, SPSS and SAS. Given that SPSS has roughly 75 times the usage in academia, that seems like quite a stretch. However, as we will soon see, usage of these newcomers is growing, while use of the older packages is shrinking quite rapidly. This plot shows RapidMiner with nearly twice the usage of KNIME, despite the fact that KNIME has a much more open source model.

Figure 2b shows the results for software used in fewer than 750 articles in 2016. This change in scale allows room for the “bars” to spread out, letting us make comparisons more effectively. This plot contains some fairly new software whose use is low but growing rapidly, such as Alteryx, Azure Machine Learning, H2O, Apache MXNet, Amazon Machine Learning, Scala, and Julia. It also contains some software that has either declined from one-time greatness, such as BMDP, or is stagnating at the bottom, such as Lavastorm, Megaputer, NCSS, SAS Enterprise Miner, and SPSS Modeler.

Figure 2b. The number of scholarly articles for the less popular data science software (those used in fewer than 750 scholarly articles in 2016).

While Figures 2a and 2b are useful for studying market share as it stands now, they don’t show how things are changing. It would be ideal to have long-term growth trend graphs for each of the analytics packages, but collecting that much data annually is too time consuming. What I’ve done instead is collect data only for the past two complete years, 2015 and 2016. This provides the data needed to study year-over-year changes.

Figure 2c shows the percent change across those years, with the “hot” packages whose use is growing shown in red (right side); those whose use is declining or “cooling” are shown in blue (left side). Since the number of articles tends to be in the thousands or tens of thousands, I have removed any software that had fewer than 500 articles in 2015. A package that grows from 1 article to 5 may demonstrate 500% growth, but is still of little interest.

 

Figure 2c. Change in the number of scholarly articles using each software in the most recent two complete years (2015 to 2016). Packages shown in red are “hot” and growing, while those shown in blue are “cooling down” or declining.

Caffe is the data science tool with the fastest growth, at just over 150%. This reflects the rapid growth in the use of deep learning models in the past few years. The similar products Apache MXNet and H2O also grew rapidly, but they were starting from a mere 12 and 31 articles respectively, and so are not shown.

IBM Watson grew 91%, which came as a surprise to me as I’m not quite sure what it does or how it does it, despite having read several of IBM’s descriptions about it. It’s awesome at Jeopardy though!

While R’s growth was a “mere” 14.7%, it was already so widely used that the percent translates into a very substantial count of 5,300 additional articles.

In the RapidMiner vs. KNIME contest, we saw previously that RapidMiner was ahead. From this plot we also see that it’s continuing to pull away from KNIME with quicker growth.

From Minitab on down, the software is losing market share, at least in academia. The variants of C and Java are probably losing out a bit to competition from several different types of software at once.

In just the past few years, Statistica was sold by Statsoft to Dell, then Quest Software, then Francisco Partners, then Tibco! Did its declining usage drive those sales? Did the game of musical chairs scare off potential users? If you’ve got an opinion, please comment below or send me an email.

The biggest losers are SPSS and SAS, both of which declined in use by 25% or more. Recall that Fig. 2a shows that despite recent years of decline, SPSS is still extremely dominant for scholarly use.

I’m particularly interested in the long-term trends of the classic statistics packages. So in Figure 2d I have plotted the same scholarly-use data for 1995 through 2016.

Figure 2d. The number of scholarly articles found in each year by Google Scholar. Only the top six “classic” statistics packages are shown.

As in Figure 2a, SPSS has a clear lead overall, but now you can see that its dominance peaked in 2009 and its use is in sharp decline. SAS never came close to SPSS’ level of dominance, and its use peaked around 2010. GraphPad Prism followed a similar pattern, though it peaked a bit later, around 2013.

Note that the decline in the number of articles that used SPSS, SAS, or Prism is not balanced by the increase in the other software shown in this particular graph. Even adding up all the other software shown in Figures 2a and 2b doesn’t account for the overall decline. However, I’m looking at only 46 out of over 100 data science tools. SQL and Microsoft Excel could be taking up some of the slack, but it is extremely difficult to focus Google Scholar’s search on articles that used either of those two specifically for data analysis.

Since SAS and SPSS dominate the vertical space in Figure 2d by such a wide margin, I removed those two curves, leaving only two points of SAS usage in 2015 and 2016. The result is shown in Figure 2e.

 

Figure 2e. The number of scholarly articles found in each year by Google Scholar for classic statistics packages after the curves for SPSS and SAS have been removed.

Freeing up so much space in the plot allows us to see that the growth in the use of R is quite rapid and is pulling away from the pack. If the current trends continue, R will overtake SPSS to become the #1 software for scholarly data science use by the end of 2018. Note however, that due to changes in Google’s search algorithm, the trend lines have shifted before as discussed here. Luckily, the overall trends on this plot have stayed fairly constant for many years.

The rapid growth in Stata use seems to be finally slowing down.  Minitab’s growth has also seemed to stall in 2016, as has Systat’s. JMP appears to have had a bit of a dip in 2015, from which it is recovering.

The discussion above has covered but one of many views of software popularity or market share. You can read my analysis of several other perspectives here.

15
Jun

MANOVA from beginning to end: Reliability

Julia yelling

Where is the Multivariate Analysis of Variance?

You promised there would be MANOVA! Now we’re in the third post!

First there was recoding of variables.

Then, there was creating scales. 

Now, we’re looking at reliability.

Patience is a virtue.

Before we get to doing a MANOVA we want to be sure that our dependent and independent variables are reliable and valid. Let’s move on to reliability.

I’m going to do a correlation matrix and a Cronbach alpha, which is a measure of internal consistency. The rationale is that if items all measure the same construct – say, knowledge of health practices, autonomy or acceptance of wife beating – then those items should be related to one another. An alpha of 0 would indicate that the covariances of the items in the scale are zero, so your scale sucks. An alpha of .95 would mean your scale is amazingly consistent.

So, I did three analyses for my three scales:

Title "Health Variables" ;
proc corr data=example alpha ;
var hbs1 hbs3-hbs7 ;
run ;

Title "Wife beating variables" ;
proc corr data=example alpha ;
var GR34 - GR39 ;
run ;

Title "Decision Variables" ;
proc corr data=example alpha ;
VAR D_GR1A GR2A D_GR3A D_GR4A GR5a GR6A D_GR7A GR8A
D_GR9A GR9F D_GR10A D_GR12A GR10F GR12F ;
run ;

Let’s skip the simple statistics (mean, etc.) you get from these analyses and go straight to the alpha values.

Cronbach alpha output from PROC CORR

The alpha for the health scale is pretty bad. The value for the raw scores is .31 and, for standardized items, still really bad at .32. When we look at how deleting a variable would improve the alpha, dropping the first variable would only bring the alpha up to .34 – still awful.

For the wife-beating scale, alpha was .81 for both the raw and standardized values, so that one was pretty good as far as reliability goes.

I put all of the decision variables together – the ones on whether the woman was involved in making decisions, could go places on her own, or needed to ask permission to go places. The Cronbach alpha for the raw variables was .65 and for the standardized variables .81. Note that standardized variables are placed on the same metric, so my idea of some variables being much more important than others did not pan out.

So … I standardized the variables, then I read in that data set and created two scales, one that was a sum of the decision  variables and the other that was the mean of the 6 wife-beating variables. There was no particular reason for using the mean of the six variables as opposed to just adding them up. I did both methods to show it was an option.

BEWARE THE SUM FUNCTION – Note, I did not use the sum function. If you add up the values, as shown below, and one of the variables has a missing value then the value of the sum is going to be missing. If you used the SUM function, the variables that have non-missing values would be added up, so the missing value would be treated as a zero. There are times where that is acceptable. This is not one of those times.
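Here is a minimal illustration of the difference, with made-up values:

data sum_demo ;
x1 = 2 ; x2 = 3 ; x3 = . ; * x3 is missing ;
added = x1 + x2 + x3 ; * missing, because adding a missing value makes the result missing ;
summed = sum(of x1-x3) ; * 5, because the SUM function ignores the missing value ;
put added= summed= ;
run ;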

While I’m at it, I want to check whether the scales have approximately normal distributions. A perfectly normal distribution would have skewness and kurtosis values of 0.

proc standard data=example mean=0 std=1 out=MAN_data;
run ;

Data create_manova ;
set man_data ;
* I could have used the mean function here, but I did not ;
decision = D_GR1A + GR2A + D_GR3A + D_GR4A + GR5a + GR6A + D_GR7A + GR8A +
D_GR9A + GR9F + D_GR10A + D_GR12A + GR10F + GR12F ;
beating = mean(of gr34-gr39);
run ;

proc univariate data=create_manova ;
var decision beating ;
run ;

The skewness values were relatively low, -1.3 and 0.2 for the two scales, and the kurtosis values were 2.0 and -1.2. Since my scales aren’t a radical departure from normality, I’m now going on to MANOVA – finally!

When I am not writing about statistics, I’m making games that teach math, social studies and language.

Check them out.

screen shots from our games

15
Jun

MANOVA from beginning to end : Creating the scales

Last time, we saw how to recode variables to score answers as correct or incorrect, on a rating scale, and weighted by importance. Today, we’re going to look at creating some scales from those variables because, for reasons I’m sure I have written about at some point in the past, single items are usually not very reliable. Whether you use SAS, SPSS, R or any other statistical package, you are still going to need to follow the steps of recoding your variables and creating and validating your scales before you get into MANOVA. Or, at least, you will if you are smart.

First, I want to check that there are no obvious errors or other problems in my data.
PROC MEANS DATA=example ;
VAR gr2A -- gr39 hbs1 -- d_gr12a ;
RUN ;

You could type in the variable names, but that is a lot of typing. The double dashes mean to include all variables in the data set, in order, from the variable before the dashes to the one that comes after them. How do you know what order the variables are in? Click on the OUTPUT DATA tab at the top and look to the left under COLUMNS.

screenshot of the OUTPUT DATA tab

If you didn’t just run a program creating your data and hence don’t have an OUTPUT DATA tab, you can find your data file by clicking the MY LIBRARIES tab and then clicking on the library (directory) where your data are kept and clicking on the dataset to open it. You can also use the PROC CONTENTS procedure but today we are being all pointy and clicky with SAS Studio.
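If you would rather use code than point and click, PROC CONTENTS does the same job:

proc contents data=example varnum ; * VARNUM lists the variables in the order they appear in the data set ;
run ;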

Sometimes you will see something like:

VAR item1 - item12 ;

The single dash is used for variables that end in a number and if you don’t have item1, item2 all the way through item12, it will give you an error and not run. Then you will be sad.

PROC MEANS will give you the N, mean, standard deviation, minimum and maximum.

Here are a few things to consider.

  • Is the N substantially less than you had expected? If so, you have a lot of missing data and you should investigate that. The lowest N I have is 37,814 out of 39,430 people, so not bad, but I might want to look at that one item, since most of the items have close to 39,000 for an N.
  • Is your standard deviation zero? STOP RIGHT THERE!  On just what variable could 39,000 people give the same response? This likely shows a big problem with your data. I did not have that problem, so I continued.
  • Are your minimum and maximum the minimum and maximum possible scores for the item? Now, this may not always be the case. On a scale of 1 to 10, say, with a sample of 50 people, maybe no one will say 1. However, I have over 39,000 people and the items are scored 0 or 1, 0 to 2, or 1 to 3, so I should have people from the minimum to the maximum or something is wrong. Nothing is wrong, and I continue.
  • Are the means about what you expect? Well, I’m not really an expert on social structure and family relations in India, so I can’t say. About a third of the women said it was usual for a husband to beat his wife if her dowry was not what was expected. About three-fourths said they would be allowed to visit a family or friend’s home alone.

Okay, so my results from the means procedure looks okay. Now what?

Next, I’m going to do a factor analysis to see if my supposition is supported of three scales related to health, beating your wife and autonomy.

Here is the code for my factor analysis.

PROC FACTOR DATA=example SCREE ROTATE=VARIMAX NFACTORS=5 ;
VAR gr2A -- gr39 hbs1 -- d_gr12a ;
RUN ;

This is actually the second factor analysis I ran. In inspecting the results of the first, between the eigenvalues and the scree plot, I decided that I should retain at most five factors. I’ve written a lot about factor analysis on this blog previously, so I’m not going to go into detail here. In short, the decision-making variables mostly loaded on the first factor with factor loadings of .70 and higher, and the median communality estimate for those items was about .67 – considerable evidence for a decision-making factor. The wife-beating variables loaded on the second factor. All but one loaded above .67, and even that variable (beating your wife if she had an extramarital affair – which 84% of the women said was accepted in their communities) loaded at .40. The variables regarding needing permission to go places loaded on the third factor and also had high communality estimates. The variables regarding going places by yourself loaded on the fourth factor and also had high communality estimates.

The health variables were a different story. Four out of six loaded between .47 and .67 on the fifth factor. The other two did not load on any factor.

At this point, it is starting to look like it is okay to retain the wife-beating items as a scale. The various measures of autonomy – decision-making, going places on your own and needing permission – seem to hang together within factors. I think it would be reasonable to put all three of these together in one scale. I talked about parceling in the past, and I could have done that as a step here, and then re-run the factor analysis to support (or not) my supposed autonomy factor. Since I have limited time and am simply doing this analysis for educational and illustrative purposes, I skipped over this and went to the next procedure, which is reliability analysis.

Since this post is pretty long already, I’ll save that for the next post.

When I am not writing about statistics, I’m making games that teach math, social studies and language.

Check them out.

screen shots from our games

11
Jun

MANOVA beginning to end: Recoding Data is Part of the Process

Other people want to go see the new Wonder Woman movie. I’ve been wanting to talk about MANOVA, but first, we need some decent dependent and independent measures.

I have the India Human Development Survey data on over 39,000 women, and my hypothesis is that education is related to women’s rights issues, especially autonomy, health practices knowledge and domestic violence. I also think that mobility might be related, as women who get out of their native village might be exposed to new ideas.

Before I can test out my (supposedly) brilliant hypotheses, I need to create some variables because it turns out when they were collecting data in India in 2011 they were not thinking about my convenience. (Yes, I, too, am appalled by this lack of consideration.)

Independent Variables

First, I will need to create my independent variables from

EW11 Differences in family by mobility

1= same village/ town

2= another village

3 = another town

4 = metro (since only 1% fall in here, I’m going to delete this category)

and education (see below)

Items that will go into dependent variables (maybe)

HEALTH QUESTIONS

HB1 Milk harmful

HB3. 1st milk good for baby 

Hb4 chulha smoke good

Hb5 child diarrhea drink more

Hb6 illness spread through water

Hb7 malaria spread

DECISIONS

The items below are scored 1 if the respondent decides, 0 if the respondent does not decide. (More than 1 person can decide, so if both husband and wife decide, the answer will be 1 for both. In this case, I just looked at if the wife had a say in the decision.)

  • GR1a Cooking
  • GR2A Expensive purchases.      
  • GR3A Decides number of children
  • GR4A Decides what to do if sick
  • GR5A Decides whether to buy land  
  • GR6A Decides wedding expense
  • GR7A Decides if child is sick
  • GR8A Decides who your children should marry

The items below are scored 1 if the woman is allowed to do these things alone and 0 if she is not.

  • GR9F Can visit health center alone
  • GR10F Can visit relative/ friend alone
  • GR12F. Can go short distance alone

These items relate to whether the woman needs to ask permission for activities, with 1 = no, 2 = must inform someone and 3 = yes.

  • GR9A Ask permission to visit health center
  • GR10A Ask permission to visit relative
  • GR12A. Ask permission to travel by bus/train

 

WIFE BEATING QUESTIONS

GR34 - GR39 – All of these relate to the circumstances under which wife beating is acceptable, coded 1 = yes, 0 = no.

As you can see, well, I hope you can see, each of these presents a different data re-coding problem.

  • Mobility and education need to be coded into categories (there is a minor reason, which I will explain in a later post, why this is not necessary but is convenient), with the fourth mobility category deleted.
  • Health questions need to be scored as correct or incorrect.
  • Decision questions are all scored equally – so deciding what food  to cook and how many children you have are each scored a 1. I think that’s not right and I want to weight some decisions more than others.
  • Independence questions need to be reverse coded, so not asking permission is a 2 and asking permission is a 0
  • Wife-beating questions need no recoding

So … here we go. The first thing we’re going to do is create categories. Notice I don’t do anything with the category 4 for mobility, so those people will just have a missing value for MOBILITY and be dropped from the analysis.

Also, a note on ELSE as opposed to just IF statements.

I could just use all IF statements but that would be inefficient. It doesn’t really matter here with 39,000 records but if I had millions it would slow down processing. The ELSE statement is only processed if the preceding IF statement is false.

NOTE!!!  In the second set of IF- ELSE statements, I have

else if ew8 < 9 and ew8 ne . then education = "ELEM" ;

This statement is only executed IF the preceding IF statement was false.  Without the ELSE, everything less than 9, including those who had 0 years of education, would be set to ELEM.  Without the and ew8 ne .  in this statement, anyone that had missing data would be set to ELEM along with anyone who had 1-8 years of education.


data example ;
set mydata.india ;
If EW11 = 1 then Mobility = "None" ;
else if EW11 = 2 then mobility = "Vill" ;
else if EW11 = 3 then mobility = "TOWN" ;

if ew8 = 0 then education = "NONE" ;
else if ew8 < 9 and ew8 ne . then education = "ELEM" ;
else if ew8 > 8 then education = "HS +" ;

*** The statements below recode the health items ;

*** For hb1 the correct answer is 0, so  1-hb1   will score respondents who said 0 as correct (= 1) and those who said 1 as incorrect (=0);

*** For hb3 the correct answer is 1, so respondents who said 1 are scored as correct (= 1) and those who said any number higher than 1 as incorrect (=0);

*** For hb4 – hb7, the correct answer is scored as correct (=1) and any numbers in the incorrect set scored as incorrect (=0);
*** HEALTH QUESTIONS ;
hbs1 = 1- hb1 ;

If hb3 = 1 then hbs3 = 1 ;
Else if hb3 > 1 then hbs3 = 0 ;
If hb4 = 2 then hbs4 = 1 ;
Else if hb4 in (1,3) then hbs4 = 0 ;
If hb5 = 2 then hbs5 = 1 ;
Else if hb5 in (1,3,4) then hbs5 = 0 ;
If hb6 = 2 then hbs6 = 1 ;
Else if hb6 in (1,3,4) then hbs6 = 0 ;

If hb7 = 3 then hbs7 = 1 ;
Else if hb7 in (1,2,4) then hbs7 = 0 ;

 

/* DECISION QUESTIONS */
/* ALSO INCLUDES ADDITIONAL ITEMS NOT RECODED */

**** Here, I multiplied items by a factor based on my estimation of importance ;
D_GR1A = GR1A* 0.5 ;
D_GR3A = GR3A * 10 ; * BECAUSE I THINK IT IS IMPORTANT ;
D_GR4A = GR4A *2 ;
D_GR7A = GR7A *2 ;

**** These items are subtracted from 3 so that does not have to tell anyone = 2 ;

****  Needs to inform someone = 1 and needs to ask permission = 0 ;
D_GR9A = 3 - GR9A ;
D_GR10A = 3 - GR10A ;
D_GR12A = 3 - GR12A ;

**** KEEPS THE VARIABLES I PLAN TO USE ;
Keep EW8 EW5  Ew6 EW10  EW14a   EW12a EW12b
HBS1 HBs3-HBS7 D_GR1A GR2A D_GR3A D_GR4A GR5a GR6A D_GR7A GR8A
D_GR9A GR9F D_GR10A D_GR12A GR10F GR12F GR34 - GR39 mobility education;

So, there we go. You might think I would dive into a Multivariate Analysis of Variance now but you would be wrong. The next thing I am going to do is check the validity of my scales through a combination of factor analysis, univariate statistics and reliability analysis. Only after  that step will I do the MANOVA.

9
Jun

An Introduction to Repeated Measures ANOVA

I’m teaching a course on multivariate statistics and for some of the students it’s been a minute since their last inferential statistics course.

So, I have been doing a few videos here and there to refresh, for example, what a repeated measures ANOVA is and why you might want to do one.

 

Sometimes I use repeated measures ANOVA to test whether our games are effective in improving math scores (they are!). You can check out the games here.
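As a refresher, a minimal repeated measures ANOVA in SAS might look like the sketch below. The data set and variable names are hypothetical: SCORES has one record per student, with PRETEST and POSTTEST math scores and a grouping variable PLAYED_GAME.

proc glm data=scores ;
class played_game ; * hypothetical grouping: played our games or not ;
model pretest posttest = played_game ;
repeated time 2 ; * the two test occasions are the within-subjects (repeated) factor ;
run ;
quit ;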

attacking the aztecs

If you are interested in being a beta tester for our first bilingual game that teaches statistics, please email info@7generationgames.com
