Hello,
I am running an interaction model and need to reorder the x-axis categories from the order displayed below to PRISm, GOLD 0 - GOLD 4. Additionally, I need to change the legend label diabetes_P1 to 'diabetes at baseline' and the title to 'Change in FEV1pp'.
proc glm data=working4 (rename=(change_fev1pp=Change));
   class finalgold_P1 (ref='GOLD 0') diabetes_P1 (ref='No');
   model Change = finalgold_P1 diabetes_P1 finalgold_P1*diabetes_P1;
   lsmeans diabetes_P1 finalgold_P1 / pdiff;
   format finalgold_P1 Baseline_GOLD_Stage. diabetes_P1 diabetes_baseline. diabetes_P3 diabetes_final.;
   label Change = 'Change in FEV1pp';
run;
quit;
Output:
I've tried:
1. Using RENAME for diabetes_P1. This does not work, I think because diabetes_P1 is formatted with the diabetes_baseline. format.
2. Adding Title = 'Change in FEV1pp Over 10 Years' to the code. There are no errors, but the title doesn't change.
Someone suggested I use ODS, but I'm unfamiliar with that code. Is there an easy way to incorporate what I need into this code?
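A sketch of one possible approach, with two assumptions worth flagging: it assumes the internal values of finalgold_P1 sort PRISm ahead of GOLD 0 - GOLD 4 (if not, the variable would need recoding first), and it assumes ODS Graphics picks up the variable label for the legend. A global TITLE statement sets the graph title, and the ORDER= option on the PROC GLM statement controls how class levels, and therefore the axis categories, are ordered:

```sas
/* Title must be set as a global statement BEFORE the procedure,
   not as an option inside it. */
title 'Change in FEV1pp';

proc glm data=working4 (rename=(change_fev1pp=Change)) order=internal;
   class finalgold_P1 (ref='GOLD 0') diabetes_P1 (ref='No');
   model Change = finalgold_P1 diabetes_P1 finalgold_P1*diabetes_P1;
   lsmeans diabetes_P1 finalgold_P1 / pdiff;
   format finalgold_P1 Baseline_GOLD_Stage. diabetes_P1 diabetes_baseline.;
   label Change = 'Change in FEV1pp'
         diabetes_P1 = 'diabetes at baseline'; /* legend text, if ODS Graphics uses the label */
run;
quit;

title; /* clear the title afterwards */
```

ORDER=INTERNAL sorts class levels by their unformatted values, so the axis follows the underlying codes rather than the alphabetical order of the formatted labels.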
Hi,

Can anybody help me with formatting the datetime correctly in my program? I am mostly getting blank columns, or getting the date as 01Jan1960:9:25:00, etc. Here's the code I am using:

data x;
   set x;
   y_new = input(y, anydtdtm.);
   format y_new datetime22.;
run;

And here's how some values in the datetime column look when imported into SAS:

45348.3506829051
45349.8328009028
45350.7706134028

Thank you! Best regards, Abhishek
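Those raw values look like Excel serial datetimes (days since 30DEC1899, with the time of day as a fractional day) rather than anything the ANYDTDTM. informat can parse, which would explain the blanks and the near-epoch 01Jan1960 results. A minimal conversion sketch, assuming the column y really holds Excel serials stored as character:

```sas
data x;
   set x;
   /* Excel serial -> SAS datetime:
      subtract 21916 (days from 30DEC1899 to 01JAN1960),
      then convert days to seconds. */
   y_new = (input(y, best32.) - 21916) * 86400;
   format y_new datetime22.;
run;
```

If y is already numeric, drop the INPUT() and use y directly in the arithmetic.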
I am currently analyzing the impact of an intervention on medication numbers using difference-in-differences analysis, but I have encountered several challenges.

Following the SAS support instructions, I conducted the difference-in-differences analysis. However, I noticed a discrepancy between my results and SAS's example (Usage Note 61830: Estimating the difference in differences of means). In the example, the 'Mean Estimate' in 'Contrast Estimate Results' is identical to the 'Estimate' in 'Least Squares Means Estimate'. In my case, however, these values differ. I suspect this is due to my use of the negative binomial distribution with a log link, resulting in exponentiated values. Consequently, I am unsure whether to rely on the 'Mean Estimate' in 'Contrast Estimate Results' or the 'Estimate' in 'Least Squares Means Estimate', and how to interpret the results.

Contrast Estimate Results
Label          Mean Estimate   Mean Confidence Limits   L'Beta Estimate   Standard Error
diff in diff   1.51            1.49                     0.41              0.0051

a*b Least Squares Means
a   b   Estimate   Standard Error   z Value   Pr > |z|
1   1   0.77       0.00434          178.19    <.0001
1   0   0.03       0.00508          6.5       <.0001
0   1   0.72       0.00408          177.71    <.0001
0   0   0.40       0.00426          93.11     <.0001

Least Squares Means Estimate
Effect             Label          Estimate   Standard Error   z Value   Pr > |z|
time*hospitalize   diff in diff   0.41       0.00509          81.01     <.0001
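The two numbers are consistent with the log-link suspicion: with a log link, the 'Mean Estimate' should be the exponential of the link-scale 'L'Beta Estimate'. A quick sanity check against the values reported above:

```sas
/* With a log link, exp(link-scale estimate) = mean-scale estimate. */
data _null_;
   lbeta    = 0.41;        /* L'Beta Estimate from Contrast Estimate Results */
   mean_est = exp(lbeta);  /* approx. 1.51, matching the reported Mean Estimate */
   put mean_est= 6.2;
run;
```

So 0.41 is the difference-in-differences on the log scale, and 1.51 is the same quantity expressed as a ratio on the mean (count) scale; which one to report depends on whether an additive log-scale effect or a multiplicative effect on counts is the more natural interpretation for the audience.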
Hi,

I'm currently merging multiple datasets and consistently using the following SQL code to verify there are no duplicate IDs at each stage of the process:

proc sql;
   select count(distinct ID) as UniqueIDs,
          count(*) as NObs
   from dataset;
quit;

This approach effectively identifies duplicates when datasets have multiple timepoints, and I make sure to only select one timepoint per ID. Throughout the merging process, my checks confirm that all IDs remain unique (e.g., 1500 observations and 1500 unique IDs). However, after the final merge, the same SQL query unexpectedly shows 1500 observations but only 1300 unique IDs. Manual verification confirms that duplicates are present, despite prior checks showing no such issues.

I'm looking for insights into why these duplicates weren't detected sooner by the SQL query, or if there's a specific merging condition I might have overlooked.

Edited to add: The SQL code above is the one that I use to check each dataset for duplicates (which it seems to find pretty well) and after each merge. Merge code (throughout, and also the last one):

data merged_dataset;
   merge dataset1 dataset2;
   by ID;
run;

Attaching the LOG for the last merge + SQL check. The result for this was 1181 UniqueIDs and 1229 NObs, while previously I was getting 1229 UniqueIDs and 1229 NObs.

LOG for the last merge + SQL check
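One thing the aggregate counts hide is which IDs repeat and in which input: a DATA step MERGE with a BY statement silently pairs rows many-to-many as soon as both inputs repeat a BY value. A sketch that lists the offending IDs rather than just counting them (dataset1 is a placeholder name; run it against each input and the merged output):

```sas
/* List the duplicated IDs themselves, not just the counts. */
proc sql;
   select ID, count(*) as n
   from dataset1
   group by ID
   having count(*) > 1;
quit;
```

Comparing these lists before and after each merge should show exactly which step introduces the duplicates.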
I need to define column width to standardize the look of the report. However, when I set the column width it automatically expands the row height for each cell. I tried setting the cellheight in various places in the code but I can't seem to constrain it. How is this done?
Here's an example using the sashelp.cars dataset that gives me the same results.
ods results off;
ods listing close;
ods tagsets.excelxp
   file="C:\test.xml"
   style=Printer
   options (
      sheet_name      = 'NEW'
      orientation     = 'landscape'
      fittopage       = 'no'
      pages_fitwidth  = '1'
      pages_fitheight = '100'
      embedded_titles = 'yes'
   );

proc report data=sashelp.cars nowd
   style(header)=[fontfamily=helvetica fontsize=8pt textalign=l]
   style(column)=[fontfamily=helvetica fontsize=8pt textalign=l tagattr='format:text'];
   columns make model type origin msrp drivetrain horsepower mpg_city;
   define make       / style(column)={width=2cm};
   define model      / style(column)={width=2cm};
   define type       / style(column)={width=2cm};
   define origin     / style(column)={width=2cm};
   define msrp       / style(column)={width=15cm};
   define drivetrain / style(column)={width=2cm};
   define horsepower / style(column)={width=2.5cm};
   define mpg_city   / style(column)={width=2cm};
run;

ods tagsets.excelxp close;
ods listing;
ods results on;
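One possible workaround is to let the ExcelXP tagset set the geometry directly through its ABSOLUTE_COLUMN_WIDTH and ROW_HEIGHTS suboptions instead of style WIDTH= overrides, which can stop the row height from auto-expanding. A sketch (the specific width and height values here are assumptions to adjust to taste):

```sas
ods tagsets.excelxp
   file="C:\test.xml"
   style=Printer
   options (
      sheet_name            = 'NEW'
      embedded_titles       = 'yes'
      /* widths in Excel character units, one per column, in COLUMNS order */
      absolute_column_width = '10,10,10,10,55,10,12,10'
      /* heights in points: title, header, data, and remaining row types;
         0 means use the default for that row type */
      row_heights           = '0,12,12,0,0,0,0'
   );
```

With ABSOLUTE_COLUMN_WIDTH in effect, the style(column)={width=...} overrides on the DEFINE statements can be dropped entirely.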