Customizing a Workflow in SAS Solution for IFRS 17
Recent Library Articles
Recently in the SAS Community Library: SAS' @sunilbhardwaj details the steps to add a customized Excel-based workflow template to the SAS Solution for IFRS 17. These steps enable you to build and test several customized workflows based on your requirements.
I am running PROC MI for multiple imputation of a 5-level categorical variable, "gmfcs_final", which is the only variable in the dataset with missing values. The imputation phase works well (code under "STEP 1"). I then run the analysis phase (code under "STEP 2"), which is focused purely on estimating the c-statistic with a 95% CI for several models based on various covariate sets. This also runs well, and I output the c-statistic and CI to the "auc_2" data set. For the third step, I cannot figure out what code to run to pool the c-statistics and CIs appropriately (Rubin's rules?). PROC MIANALYZE has syntax for pooling parameter estimates, but those are not the interest of this study. Any idea on the code, using PROC MIANALYZE or otherwise, to get appropriately pooled c-statistics and CIs?

STEP 1

proc mi data=b seed=1305417 nimpute=65 out=mi_fcs;
   class gmfcs_final sex race ethnicity smoking_num ins yr_start WCI_score_1cl
         W1-W25 base_2-base_19 fx1_base fx2_base fx3_base
         fu_2_5yr_censrsn fu_3_5yr_censrsn fu_4_5yr_censrsn fu_5_5yr_censrsn
         fu_6_5yr_censrsn fu_7_5yr_censrsn fu_8_5yr_censrsn fu_9_5yr_censrsn
         fu_10_5yr_censrsn fu_11_5yr_censrsn fu_12_5yr_censrsn fu_13_5yr_censrsn
         fu_14_5yr_censrsn fu_15_5yr_censrsn fu_16_5yr_censrsn fu_17_5yr_censrsn
         fu_18_5yr_censrsn fu_19_5yr_censrsn fu_fx1_5yr_censrsn fu_fx2_5yr_censrsn
         fu_fx3_5yr_censrsn death_5yr_censrsn;
   var gmfcs_final age sex race ethnicity smoking_num ins yr_start WCI_score_1cl
       W1-W25 base_2-base_19 fx1_base fx2_base fx3_base
       fu_2_5yr_censrsn fu_3_5yr_censrsn fu_4_5yr_censrsn fu_5_5yr_censrsn
       fu_6_5yr_censrsn fu_7_5yr_censrsn fu_8_5yr_censrsn fu_9_5yr_censrsn
       fu_10_5yr_censrsn fu_11_5yr_censrsn fu_12_5yr_censrsn fu_13_5yr_censrsn
       fu_14_5yr_censrsn fu_15_5yr_censrsn fu_16_5yr_censrsn fu_17_5yr_censrsn
       fu_18_5yr_censrsn fu_19_5yr_censrsn fu_fx1_5yr_censrsn fu_fx2_5yr_censrsn
       fu_fx3_5yr_censrsn death_5yr_censrsn;
   fcs discrim(gmfcs_final = age sex race ethnicity smoking_num ins yr_start WCI_score_1cl
               W1-W25 base_2-base_19 fx1_base fx2_base fx3_base
               fu_2_5yr_censrsn fu_3_5yr_censrsn fu_4_5yr_censrsn fu_5_5yr_censrsn
               fu_6_5yr_censrsn fu_7_5yr_censrsn fu_8_5yr_censrsn fu_9_5yr_censrsn
               fu_10_5yr_censrsn fu_11_5yr_censrsn fu_12_5yr_censrsn fu_13_5yr_censrsn
               fu_14_5yr_censrsn fu_15_5yr_censrsn fu_16_5yr_censrsn fu_17_5yr_censrsn
               fu_18_5yr_censrsn fu_19_5yr_censrsn fu_fx1_5yr_censrsn fu_fx2_5yr_censrsn
               fu_fx3_5yr_censrsn death_5yr_censrsn / classeffects=include) nbiter=100;
run;

STEP 2

proc logistic data=mi_fcs plots(only)=roc;
   class sex race3 smoking_num ins2 yr_start_cat W24 W25 WCI_score_1cl gmfcs_final;
   model fu_2_5yr(event='1') = age sex race3 smoking_num ins2 yr_start_cat
                               WCI_score_1cl gmfcs_final / nofit;
   roc 'Base model' age sex race3 smoking_num ins2 yr_start_cat;
   roc 'GMFCS only' gmfcs_final;
   roc 'WCI only'   WCI_score_1cl;
   roc 'Base+GMFCS' gmfcs_final age sex race3 smoking_num ins2 yr_start_cat;
   roc 'Base+WCI'   WCI_score_1cl age sex race3 smoking_num ins2 yr_start_cat;
   ods output rocassociation=auc_2;
   by _imputation_;
run;
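One common approach, sketched below, is to feed PROC MIANALYZE a data set of per-imputation point estimates and standard errors via its MODELEFFECTS and STDERR statements. This assumes the ROCAssociation output contains an Area column and a StdErr column (verify the variable names in auc_2; if only the CI limits are present, a standard error can be back-calculated as (upper − lower)/(2 × 1.96)). Note that Rubin's rules assume approximately normal estimates; some authors recommend pooling the AUC on a logit scale instead, which this sketch does not do.

```
/* Sketch: pool the per-imputation c-statistics with Rubin's rules.
   Assumes auc_2 has variables ROCModel, _imputation_, Area, StdErr. */
proc sort data=auc_2;
   by ROCModel _imputation_;
run;

proc mianalyze data=auc_2;
   by ROCModel;           /* pool each candidate model separately */
   modeleffects Area;     /* per-imputation point estimates       */
   stderr StdErr;         /* per-imputation standard errors       */
run;
```

The "Parameter Estimates" table from PROC MIANALYZE then gives the pooled c-statistic and its 95% confidence limits for each ROC model label.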
I trust that this conversation will benefit the community. I encountered this problem while working on my website, https://pythononlinecompiler.com/, and gained valuable insights while the issue persisted.

Data integration is the process of merging data from various sources to create a cohesive perspective. It is crucial for organizations because it facilitates well-informed decision-making and guarantees that all pertinent data is both accessible and usable. Nevertheless, data integration presents numerous obstacles that can hinder its execution and impact.

Data Integration Challenges

Data Fragmentation
Description: Data fragmentation arises when data is segregated across various systems or departments within a company.
Impact: Fragmentation hampers the ability to obtain a holistic view of the information, resulting in inefficiencies and incomplete understanding.

Data Accuracy
Description: Guaranteeing data accuracy entails upholding the precision, completeness, uniformity, and timeliness of data.
Impact: Inaccurate data can lead to flawed analyses and decisions, eroding confidence in the unified data system.

Heterogeneous Data Sources
Description: Organizations often collect data from sources that use different formats, structures, and standards.
Impact: Integrating heterogeneous data requires complex transformations and standardization, which can be resource-intensive.

Scalability
Description: As organizations grow, data volumes increase, necessitating scalable integration solutions.
Impact: Ensuring that integration processes handle growing data volumes without degrading performance is challenging.

Security and Compliance
Description: Data integration must adhere to security protocols and regulatory requirements to protect sensitive information.
Impact: Balancing integration with security and compliance can be difficult, especially in heavily regulated industries.

Real-Time Integration
Description: Certain applications require real-time data integration to offer the most current information.
Impact: Real-time integration demands sophisticated technologies and a resilient infrastructure, which can be expensive and intricate to establish.

Cost and Expertise
Description: Data integration projects frequently require substantial investments in both technology and skilled staff.
Impact: The financial implications and the need for specialized expertise can pose challenges for many organizations, especially smaller ones.

Data Governance
Description: Establishing precise data governance policies is essential for effectively managing data integration processes.
Impact: Insufficient governance can result in data misuse, discrepancies, and failure to comply with regulations.

As a practical illustration, imagine a company that aims to merge customer data from its CRM, sales, and support systems, but the data is stored in various formats and databases, presenting a considerable obstacle.

To summarize, data integration is of utmost importance but presents difficulties revolving around data silos, quality, heterogeneity, scalability, security, real-time processing, cost, and governance. @danieljames
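The CRM/sales/support illustration above can be sketched in a few lines of SAS. Everything here is hypothetical: the librefs (crm, sales, support), the data set names, and the columns are placeholders standing in for whatever the source systems actually expose.

```
/* Sketch: integrate customer data from three hypothetical sources
   into one view, keyed on a shared customer_id. */
proc sql;
   create table work.integrated_customers as
   select c.customer_id,
          c.customer_name,
          s.total_orders,    /* from the sales system   */
          t.open_tickets     /* from the support system */
   from crm.customers as c
        left join sales.order_summary as s
          on c.customer_id = s.customer_id
        left join support.ticket_summary as t
          on c.customer_id = t.customer_id;
quit;
```

In practice the hard part is upstream of this join: mapping inconsistent keys and formats from each source into the shared customer_id and column layout assumed here.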
Hello, there are two labels at the bottom of my plot (attached file).
I can remove the first one, the subgroup index (month), using NOHLABEL; how do I remove the other label, which shows the subgroup size with its min and max?
Here is my code:
proc shewhart data=tmp2_month;
   pchart yes_answer1*month /
      markers
      subgroupn=n_per_group
      odstitle="P-chart of Pts Counseling by month"
      outtable=outtable
      turnhlabels;
   label yes_answer="Prop of pts counselled";
run;

proc print noobs;
run;
title;
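The legend showing the subgroup sample size with its minimum and maximum can typically be suppressed with the NOLEGEND option of the PCHART statement (worth verifying against the SHEWHART documentation for your SAS/QC release). A minimal sketch:

```
proc shewhart data=tmp2_month;
   pchart yes_answer1*month /
      markers
      subgroupn=n_per_group
      nohlabel    /* removes the subgroup index (month) label  */
      nolegend    /* suppresses the subgroup-size legend       */
      odstitle="P-chart of Pts Counseling by month"
      outtable=outtable
      turnhlabels;
run;
```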
I want to apply Tests for Special Causes in the p-chart, but the sample size differs from subgroup to subgroup. As a result I do not have a fixed UCL and LCL, and I get the following warning with the code below:

WARNING: Asymmetric control limits encountered for yes_answer1 for at least one subgroup.

proc shewhart data=tmp2;
   pchart yes_answer1*quarter /
      subgroupn=total_count1
      tests=1 to 8
      testnmethod=standardize
      table
      tablelegend;
run;

Note: The SHEWHART procedure provides options for working with unequal subgroup sample sizes. For example, I can use the LIMITN= option to specify a fixed (nominal) sample size for computing the control limits. Below are my subgroup sizes; my question is, how do I choose the number for LIMITN=?

quarter   subgroup size
2021Q3    30
2021Q4    66
2022Q1    54
2022Q2    66
2022Q3    69
2022Q4    74
2023Q1    83
2023Q2    96
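One common convention, sketched below, is to use the average subgroup size as the nominal n; for the sizes listed above that average is about 67 (the median, 67.5, is a similar choice). The value 67 is my illustrative pick, not a requirement. When LIMITN= is in effect, the ALLN option is typically needed so that points are plotted for all subgroups rather than only those whose size equals the nominal n (check the SHEWHART documentation for your release).

```
/* Sketch: fixed (nominal) sample size for the control limits.
   limitn=67 is the rounded average of the subgroup sizes above. */
proc shewhart data=tmp2;
   pchart yes_answer1*quarter /
      subgroupn=total_count1
      limitn=67      /* assumed nominal n, giving fixed UCL/LCL   */
      alln           /* plot all subgroups, not only those n=67   */
      tests=1 to 8
      testnmethod=standardize
      table
      tablelegend;
run;
```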