Hi, I am new to SAS and I am trying to read data from a .txt file. Could someone please help me figure out why the IF-THEN statement only applies to the first row of the data? Thanks!
Data:
DATA work.condo_ranch;
INFILE '/home/u63748921/SAS 123 Problems/sas123_p13.txt' DSD;
INPUT style $ @;
IF style = 'RANCH' OR style = 'CONDO' THEN INPUT sqfeet bedrooms baths street $ price : dollar10.;
RUN;
Output:
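For what it's worth, two things commonly produce this symptom (offered as a guess, since the raw .txt file isn't shown): DSD defaults to a comma delimiter, so on a space-delimited file the first INPUT can consume more than the STYLE field, and the comparison is case-sensitive, so a value like 'Ranch' would not match 'RANCH'. A minimal sketch of the usual conditional-input pattern, assuming the file is space-delimited:

```sas
DATA work.condo_ranch;
   INFILE '/home/u63748921/SAS 123 Problems/sas123_p13.txt' DLM=' ' TRUNCOVER;
   INPUT style $ @;   /* trailing @ holds the record for a second INPUT */
   IF UPCASE(style) IN ('RANCH', 'CONDO') THEN
      INPUT sqfeet bedrooms baths street $ price : dollar10.;
   ELSE INPUT;        /* move past the held record */
RUN;
```

If the file really is comma-delimited, keep DSD and drop the DLM override; the UPCASE guard is harmless either way.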
As I recently posted here, I'm trying to create Table 1 for a research journal article. @Reeza pointed me to the Table 1 macro "TableN" and it looks like exactly what I need.
I downloaded the macro and ran it, and then I tried to call it based on the example provided.
/*Example macro call*/
/*%tablen(data=example, by=arm,
var=age date_on sex race1 smoke_st num_met,
type=1 3 2, outdoc=~/ibm/example1.rtf);*/
/*My version*/
%tablen(data=have, by=TM_group,
var=DEM_AGE DEM_SEX,
type=2 2, outdoc="C:\data_output\test.rtf);
But I got an error after "by TM_group":
ERROR: All positional parameters must precede keyword parameters.
I looked up the error, and it seems to have something to do with the commas between the parameters. After some trial and error deleting commas, I got it to run without errors, but it doesn't produce any output.
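One thing that stands out in the posted call: the opening double quote in outdoc= has no matching closing quote, so the macro processor treats everything after it (commas and closing parenthesis included) as part of the value, which can produce exactly this positional-parameter error. The example call passes the path unquoted, so a sketch of the call with the quote removed (the path itself is kept from the post):

```sas
%tablen(data=have, by=TM_group,
        var=DEM_AGE DEM_SEX,
        type=2 2, outdoc=C:\data_output\test.rtf);
```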
Sample data is provided below:
data have;
infile datalines dsd dlm=',' truncover;
input DEM_AGE DEM_SEX cohort_flag TM_group;
datalines;
3,1,1,0
2,1,1,1
3,2,1,1
3,2,1,1
3,2,1,0
2,2,1,1
2,1,1,1
3,1,1,1
2,1,1,1
3,2,1,0
2,1,1,0
2,2,1,1
3,2,1,0
2,2,0,
3,2,1,1
3,2,1,1
3,1,1,0
3,2,1,0
2,1,1,0
3,1,1,1
3,2,1,1
3,2,1,0
3,2,1,1
3,2,1,1
3,2,1,1
;
RUN;
proc format library=temp;
value age2grp
1='1:Age Group <65'
2='2:Age Group [65,75)'
3='3:Age Group >=75'
.='Inapplicable/Missing';
value sex
.='Inapplicable/Missing'
1='1:Male'
2='2:Female';
value yesfmt
1='1:Yes'
2='2:No'
.='Inapplicable/Missing'
;
RUN;
Hi, I have 2 data sets like below.

Data one:
A 1
A 2
B 1
B 2

Data two:
A Apple
A Peach
B Banana

I need to join them (a Cartesian join), but only within each ID (A and B). This is what I want to get:

A 1 Apple
A 1 Peach
A 2 Apple
A 2 Peach
B 1 Banana
B 2 Banana

If I use the code below,

proc sql;
create table three as
select one.*, two.*
from one, two;
quit;

what I get is:

A 1 Apple
A 1 Peach
A 1 Banana
A 2 Apple
A 2 Peach
A 2 Banana
B 1 Apple
B 1 Peach
B 1 Banana
B 2 Apple
B 2 Peach
B 2 Banana

Is there any way to only join by the ID (A, B)? Thank you very much!
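A minimal sketch of the keyed join, assuming the first column in each data set is named id and the remaining columns are named num and fruit (the post doesn't show the variable names): restricting the Cartesian product with a WHERE clause on the shared key yields the within-ID product shown in the desired output.

```sas
proc sql;
   create table three as
   select one.id, one.num, two.fruit
   from one, two
   where one.id = two.id;   /* keep only pairs that share an ID */
quit;
```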
Referring to the old post below: I searched, but can't find any sample program. Can anyone post a specific link to a sample program? Thanks. Solved: Nonmem Program - SAS Support Communities
I'm working on an actuarial project to estimate monthly probabilities that someone becomes disabled. A portfolio of persons (each having a different 'PolicyNr') is observed during 12 months, and the time until disability is registered by the variable 'TimetoDisability'. When no disability occurred during the 12 months, the variable 'RightCensored' has the value 1. We further have the variables 'Gender', 'AgeatDisability' (which equals the age at disability, or the age after 12 months for the right-censored observations), and the variable 'OccupationClass'. The data are available in a wide format. For example, the following lines are part of the data:

PolicyNr  TimetoDisability  RightCensored  Gender  AgeatDisability  OccupationClass
001       2 months          0              Male    40 year          1
002       3 months          0              Male    30 year          2
003       12 months         1              Female  42 year          1

Intuitively, I would model this using a Cox proportional hazards model, with the variable 'TimetoDisability' as the time until the occurrence of the disability, and 'Gender', 'AgeatDisability', and 'OccupationClass' as covariates. Monthly probabilities are derived from the survival function. Now assume that, because of practical/technical reasons, it is only possible to perform a GLM Binomial regression. I read that performing a GLM Binomial regression on data with pseudo observations is analogous to a Cox discrete-time survival model. To prepare the analysis, I transform the wide dataset to a long dataset (with pseudo observations; see, e.g., https://grodri.github.io/glms/notes/c7s6), in which each line is duplicated according to the variable TimetoDisability. For example, the first line from the table above is transformed into 2 lines, as it took 2 months to become disabled. The last of those lines has the value 1 for the variable 'Disability', as the disability occurred in month 2. The variable 'AgeatDisability' is transformed into the variable 'Age', now representing the age during that month.
The right-censored observation is transformed into 12 lines, all having the value zero for the variable 'Disability', as the disability is not observed. This becomes:

PolicyNr  Duration  Disability  Gender  Age                OccupationClass
001       1         0           Male    39 year 11 months  1
001       2         1           Male    40 year            1
002       1         0           Male    29 year 10 months  2
002       2         0           Male    29 year 11 months  2
002       3         1           Male    30 year            2
003       1         0           Female  41 year 1 month    1
003       2         0           Female  41 year 2 months   1
003       3         0           Female  41 year 3 months   1
003       4         0           Female  41 year 4 months   1
003       5         0           Female  41 year 5 months   1
003       6         0           Female  41 year 6 months   1
003       7         0           Female  41 year 7 months   1
003       8         0           Female  41 year 8 months   1
003       9         0           Female  41 year 9 months   1
003       10        0           Female  41 year 10 months  1
003       11        0           Female  41 year 11 months  1
003       12        0           Female  42 year            1

Question: in this long data format, the multiple rows (pseudo observations) for each person are not independent; we have repeated measures for each person. However, I read in Therneau and Grambsch (quote): "One concern that often arises is that observations [on the same individual] are 'correlated,' and would thus not be handled by standard methods. This is not actually an issue. The internal computations for a Cox model have a term for each unique death or event time..." So for a Cox discrete-time survival model, the dependency is not an issue. However, I don't see how the dependency in the data is not an issue for a GLM Binomial regression. Given the dependency in the data, is it appropriate to perform a GLM to get trustworthy estimates of monthly probabilities? Or should I go for a mixed effects model? Thank you.
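As a sketch of the expansion step described above (the dataset names wide and long are assumptions, TimetoDisability is assumed to be stored as a numeric month count, and the month-by-month age update is omitted for brevity), the person-period data set can be built with a DO loop, after which a binomial GLM can be fitted with, e.g., PROC LOGISTIC:

```sas
data long;
   set wide;                             /* one row per PolicyNr */
   do Duration = 1 to TimetoDisability;  /* TimetoDisability in months */
      /* event indicator: 1 only in the month the disability occurs,
         never for right-censored policies */
      Disability = (Duration = TimetoDisability and RightCensored = 0);
      output;
   end;
run;

proc logistic data=long;
   class Duration Gender OccupationClass / param=ref;
   model Disability(event='1') = Duration Gender AgeatDisability OccupationClass;
run;
```

Here AgeatDisability is carried along unchanged as a simplification; in the post, Age is updated each month instead.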