I have very little experience working with weights, so please correct me if my understanding is wrong.
I'm trying to create a summary table of unadjusted rates of quality of care between the TM and MA groups. I was able to produce a table with Proc Ttest and ODS. However, the survey uses a complex design. I need to add a weight variable and, it appears, replicate weight variables. Unfortunately, Proc Ttest can accommodate a weight variable but not replicate weights.
Just to experiment, I ran Proc Ttest with the weight variable, and the p-values for the variables got smaller. That confuses me, because the study documentation says: "To permit the calculation of random errors due to sampling, a series of replicate weights were computed. Unless the complex nature is taken into account, estimates of the variance of a survey statistic may be biased downward." In other words, ignoring the design means underestimating the variance. And if the true variance is actually higher, shouldn't that reduce the significance? One variable I looked at has a Pr > |t| of 0.0158 unweighted and 0.0025 weighted.
Based on what I found in the study documentation, I'm trying to use Proc Surveyfreq instead. However, this is confusing me as well. The Pr > ChiSq score is now <.0001 for every variable, even those that were not significant when I used Proc Ttest.
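For context, here is my understanding of why the replicate weights matter (this is the standard Fay-BRR estimator, not something quoted from the study documentation). The full-sample weight only fixes the point estimates; the design-based variance comes from the spread of the R replicate estimates:

\[
\widehat{\operatorname{Var}}(\hat\theta) \;=\; \frac{1}{R\,(1-f)^{2}} \sum_{r=1}^{R} \bigl(\hat\theta_r - \hat\theta\bigr)^{2},
\]

with Fay coefficient f = 0.3 and R = 100 here. A WEIGHT statement alone treats the weights as if they were precision weights, which is not a design-based variance, so a smaller p-value under WEIGHT alone is not by itself evidence either way; only the replicate weights recover the design-based standard errors.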
Here is the code, with sample data and Proc TTest commented out. I'm only including 1 of the replicate weights here, but there are actually 100 of them:
data have;
infile datalines dsd dlm=',' truncover;
input ACC_HCTROUBL_r ACC_HCDELAY_r ADRD_group TM_group
PUFFWGT PUFF001;
datalines;
3,1,1,0,1310.792231,1957.576268
2,1,1,1,10621.60998,18588.46812
3,2,1,1,3042.093381,5484.728615
3,2,1,1,3166.358963,5497.289892
3,2,1,0,1481.272986,432.6313548
2,2,1,1,6147.605583,9371.965632
2,1,1,1,14001.79093,16689.25322
3,1,1,1,2035.685768,530.211881
2,1,1,1,6356.258972,1899.874476
3,2,1,0,1487.104781,2018.636444
2,1,1,0,5002.553584,1364.125425
2,2,1,1,2493.79145,4039.542597
3,2,1,0,2260.257377,3495.675613
2,2,0,1,9358.048737,2835.543292
3,2,1,1,2978.506348,4932.378916
3,2,1,1,2794.906054,5118.430973
3,1,1,0,1663.418821,519.7549258
3,2,1,0,2083.459361,3067.105973
2,1,1,0,5106.785048,8672.202644
3,1,1,1,3447.574748,854.6276748
3,2,1,1,2819.233426,899.849234
3,2,1,0,4067.38684,6463.15598
3,2,1,1,1249.96647,2053.666234
3,2,1,1,1730.237908,3058.307502
3,2,1,1,4932.936202,1479.55826
; RUN;
/*PROC TTEST plots=none data=have;
CLASS TM_group;
VAR ACC_HCTROUBL_r ACC_HCDELAY_r;
WEIGHT PUFFWGT;
REPWEIGHT PUFF001; * really PUFF001-PUFF100, but PROC TTEST has no REPWEIGHT statement;
RUN;*/
PROC SURVEYFREQ data=have VARMETHOD = brr (fay=.30);
TABLE (ACC_HCTROUBL_r ACC_HCDELAY_r) * TM_group / row chisq lrchisq; /* parentheses so each variable is crossed with TM_group */
WEIGHT PUFFWGT;
REPWEIGHT PUFF001; /*REPWEIGHT PUFF001-PUFF100;*/
WHERE ADRD_group ^= 1;
RUN;
Hi guys,
suppose I have the following dataset:
data DB;
input ID :$20. Admission :date9. Discharge :date9. Diagnosis :$20.;
format Admission date9. Discharge date9.;
cards;
0001 06DEC2014 14DEC2014 VIRUS_A
0001 08NOV2020 11NOV2020 FLU
0004 14MAY2014 02JUN2014 FLU
0004 30JUN2015 15AUG2015 FLU
0004 16FEB2019 18FEB2019 VIRUS_A
0005 10AUG2019 11SEP2019 FLU
....
;
I have to fit a time-series model to estimate the weekly number of hospitalizations for VIRUS_A. The dataset shown is just an example of the real one. I don't know how to derive the week from the admission dates (I also have discharge dates). The study starts in 2014, but patients are admitted at different times throughout 2014 and later. Moreover, does week numbering start from 01 January? If so, what happens when 01 Jan falls in the middle of a week?
Apart from the practical SAS programming, the theory behind mapping dates to weeks is also not clear to me. This is the first time I've dealt with this kind of data.
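A minimal sketch of one common approach (dataset and variable names follow the example above; the Sunday week start is just SAS's default for the 'week' interval, not a requirement):

```
/* Anchor every admission to the first day of its week */
data virus_a;
   set DB;
   where Diagnosis = 'VIRUS_A';
   week_start = intnx('week', Admission, 0, 'beginning');
   format week_start date9.;
run;

/* Weekly hospitalization counts */
proc freq data=virus_a noprint;
   tables week_start / out=weekly(keep=week_start count);
run;
```

INTNX's 'week' interval starts on Sunday; 'week.2' starts on Monday, and WEEK(Admission, 'V') returns ISO-8601 week numbers, under which 01 Jan can belong to week 52/53 of the previous year. That is the usual answer to the mid-week 01 Jan question: pick a convention (ISO weeks are common for surveillance data) and apply it consistently. Note that weeks with zero admissions will be absent from WEEKLY and must be filled in before modeling.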
Thank you in advance
Hello,
I used the code below to estimate propensity scores and fit a logistic regression with inverse probability weighting.
How can I check covariate balance (standardised mean differences) before and after weighting?
How can I obtain the synthetic n values implied by the weights?
Thanks
/***CREATING PROPENSITY SCORES********/
proc sort data=tab_imput; by _imputation_;run;
proc logistic data=tab_imput desc;
class var1 var2 var3 var4 var5 var6 var7 var8 var9 ;
model mut= var var1 var2 var3 var4 var5 var6 var7 var8 var9/link=logit rsquare ;
output out=denom p=d;
by _imputation_;
run;
proc logistic data=tab_imput desc;
model mut=;
output out=num p=n;
by _imputation_;
run;
proc sort data=tab_imput; by anonymat; run;
proc sort data=denom; by anonymat; run;
proc sort data=num; by anonymat; run;
data tab_imput_pscore;
merge tab_imput denom num;
by anonymat;
if mut=1 then uw=1/d; else if mut=0 then uw=1/(1-d);
if mut=1 then sw=n/d; else if mut=0 then sw=(1-n)/(1-d);
run;
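In symbols (my reading of the step above, using standard IPW notation): with treatment A (mut), covariates X, denominator prediction d = Pr(A=1|X), and numerator prediction n = Pr(A=1), the data step computes the stabilized weight

\[
sw_i \;=\; \frac{\Pr(A = a_i)}{\Pr(A = a_i \mid X_i)},
\]

i.e. n/d for the treated and (1-n)/(1-d) for the untreated, while the unstabilized weight uw replaces the numerator with 1.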
proc sort data=tab_imput_pscore; by _imputation_;run;
/***PROPENSITY SCORE WEIGHTED OUTCOME MODEL****/
ods graphics on;
proc logistic data=tab_imput_pscore desc;
class mut(ref='no') / param=reference ;
model vif (event='no') = mut/ rsquare clodds=wald lackfit ;
weight sw ;
by _imputation_;
oddsratio mut;
ods output parameterEstimates = ipw_mut ;
run;
ods graphics off;
proc mianalyze parms=ipw_mut ;
modeleffects mut;
ods output parameterEstimates = ipw_mut1;
run;
data ipw_mut2; set ipw_mut1;
OR_est=EXP(ESTIMATE);
LCI_OR=OR_est*EXP(-1.96*STDERR);
UCI_OR=OR_est*EXP(+1.96*STDERR);
run;
proc print data=ipw_mut2;
var Parm OR_est LCI_OR UCI_OR Probt ;
run;
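On the two questions: a sketch of one way to obtain both the balance inputs and the synthetic n from the weights (variable names are taken from the code above; shown for one covariate and one imputation for brevity):

```
/* Weighted mean/variance per treatment group; SUMWGT is the synthetic n */
proc means data=tab_imput_pscore mean var sumwgt noprint;
   where _imputation_ = 1;     /* one imputation, for illustration */
   class mut;
   var var1;                   /* repeat for each covariate */
   weight sw;
   output out=bal mean=m var=v sumwgt=synth_n;
run;

/* Standardised mean difference:
     SMD = (m_treated - m_control) / sqrt((v_treated + v_control) / 2)
   Rerun the same step WITHOUT the WEIGHT statement to get the
   "before adjustment" SMD. */
```

For binary covariates, the weighted proportion p with variance p(1-p) plays the role of the mean and variance in the same formula.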
Good morning.
I'm hoping some can help because I'm out of ideas.
I'm hoping someone can help, because I'm out of ideas.
I'm attempting to use ODS Excel to place the page number and total number of pages, right-justified, in the footer. I've tried the three different methods in the example below. I've done some searching and found some information, but I can't get it to work.
I get warning messages such as:
WARNING: Apparent symbolic reference R not resolved.
WARNING: Apparent symbolic reference P not resolved.
WARNING: Apparent symbolic reference N not resolved.
Can anyone give me a hint as to what I may be doing wrong here?
options(sheet_interval = 'none'
sheet_name = 'Sheet Name'
frozen_headers = '2'
row_repeat = '1-2'
suppress_bylines = 'yes'
autofilter = 'all'
pages_fitwidth = '1'
pages_fitheight = '50'
/* print_footer = '&R &P & of &N'*/
/* print_footer = 'Page &P & of &N'*/
print_footer = "Page &R &P of &N"
/* print_footer = "&;LInstitutional Research and Assessment &R Page &P of &N &Z &F"*/
absolute_column_width = '12, 30, 30, 18, 10, 10, 12, 50');
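A likely explanation, since the warnings name exactly R, P, and N: inside double quotation marks the SAS macro processor tries to resolve &R, &P, and &N as macro variables, which is what "Apparent symbolic reference ... not resolved" means. Single quotation marks (or %NRSTR) keep the ampersands literal so Excel receives its own &R/&P/&N footer codes:

```
/* single quotes: the macro processor leaves & alone */
print_footer = '&R Page &P of &N'

/* or, if double quotes are needed for some other reason: */
print_footer = "%nrstr(&R Page &P of &N)"
```

Note also that the commented single-quoted attempts contain a stray "&" before "of" ('Page &P & of &N'), which Excel would misinterpret even without the macro warnings.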