Hi, I'm trying to add a table of contents to an RTF file. As I saw in other discussions and examples, the code is something like:

ods rtf file="myfolder\Table_X.rtf" style=Journalvs CONTENTS toc_data;

Everything works: in the RTF output the table of contents is created, and when I update the field the contents appear. The problem is with the headers and footnotes. I have defined these headers and footnotes for the entire document, and I change them in each report, but the TOC seems to sit in a separate section of the document, and those titles are not populated there. How can I add them? Also (minor at this point), I would like to know how to change the font size of the TOC elements; I tried with PROC TEMPLATE, but nothing changed.

The code:

options orientation=landscape replace center nodate nonumber;
ODS LISTING CLOSE;
ods ESCAPECHAR='^';
Title1 J=l "Study 1" J=R "Page ^{thispage} of ^{lastpage}";
Footnote1 J=L " myself";
ods rtf file="myfolder\Table_X.rtf" style=Journalvs CONTENTS toc_data;
title2 J=l "Table 1. Baseline and disease categorical characteristics.";
Footnote1 j=l "P-values obtained from Fisher exact test (*) or simulated MC Fisher test (#). NA: Not applicable.";
Footnote2 J=L " myself";
ODS PROCLABEL=' ';
proc report data=tmp.cats contents='Table 1' center nowd split="@"
            style(report)={just=center}
            style(header)=[background=white bordercolor=black borderbottomwidth=0.1pt vjust=center]
            style(column)=[bordercolor=white];
column order_1 order_2 order_3 col1 col2;
define order_1 / order noprint;
define order_2 / order noprint;
define order_3 / order noprint;
define col1 / display "Group" left style(column)=[cellwidth=10% asis=on];
define col2 / display "Parameter" left style(column)=[cellwidth=15% asis=on] style(header)=[bordertopwidth=0pt];
compute before order_2 / style=[just=l font_face=Courier font_size=8pt font_weight=bold foreground=black background=white];
line j=l " ";
endcomp;
break after order_1 / page contents=' ';
break before order_1 / page contents=' ';
run;
title2 J=l "Table 2.";
Footnote1 j=l "P-values obtained from t-test with equal or unequal (**) variances. NA: Not applicable.";
Footnote2 J=L "myself";
ODS PROCLABEL=' ';
proc report data=tmp.conts contents='Table 2' center nowd split="@"
            style(report)={just=center}
            style(header)=[background=white bordercolor=black borderbottomwidth=0.1pt vjust=center]
            style(column)=[bordercolor=white];
column order_1 order_2 order_3 col1 col2;
define order_1 / order noprint;
define order_2 / order noprint;
define order_3 / order noprint;
define col1 / display "Group" left style(column)=[cellwidth=10% asis=on];
define col2 / display "Parameter" left style(column)=[cellwidth=15% asis=on] style(header)=[bordertopwidth=0pt];
compute before order_2 / style=[just=l font_face=Courier font_size=8pt font_weight=bold foreground=black background=white];
line j=l " ";
endcomp;
break after order_1 / page contents=' ';
break before order_1 / page contents=' ';
run;
ods rtf close;
ods html close;
ods listing;
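For the TOC font size, one approach is to derive a custom style and override the style elements that control the contents entries. This is only a sketch, not tested against your template store: it assumes your Journalvs style is resolvable as a parent, and that the RTF TOC entries are governed by the ContentTitle, ContentItem, and ContentProcLabel style elements (check your style's source with PROC TEMPLATE to confirm the element names).

```sas
/* Sketch: shrink the TOC entry fonts by deriving a new style.
   Assumptions: styles.journalvs exists in a searched template store,
   and ContentTitle/ContentItem/ContentProcLabel are the elements
   that control the TOC entries in your RTF output. */
proc template;
  define style styles.journalvs_toc;
    parent = styles.journalvs;
    style ContentTitle from ContentTitle / font_size=8pt;
    style ContentItem from ContentItem / font_size=8pt;
    style ContentProcLabel from ContentProcLabel / font_size=8pt;
  end;
run;

/* Then point the ODS RTF statement at the derived style */
ods rtf file="myfolder\Table_X.rtf" style=styles.journalvs_toc contents toc_data;
```

If the font does not change, run `proc template; source styles.journalvs; run;` to list the actual element names your style defines and adjust the overrides accordingly.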
When I converted the dataset back with %xpt2loc, I found that the XPT created in V8 format via %loc2xpt was missing the dataset labels. Does anyone know about this? Thanks!
I ran the following code and got the curve below.
proc lifetest data=sasuser.combine_aim2n2 plots=survival(f) timelist=(0 to 132 by 12);
  time month2*ui2(0);
run;
Would there be a way to change the scale of the x-axis from 0 25 50 75 100 125 to 0 12 24 36 48 60 72 84 96 108 120 132?
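One way to control the tick marks is to save the product-limit estimates with OUTSURV= and re-draw the curve with PROC SGPLOT, where the axis is fully configurable. This is a sketch, not run against your data: it assumes the dataset and variable names from the code above.

```sas
/* Save the Kaplan-Meier estimates instead of plotting them directly */
proc lifetest data=sasuser.combine_aim2n2 outsurv=survest plots=none;
  time month2*ui2(0);
run;

/* Re-draw the survival curve with 12-month tick marks */
proc sgplot data=survest;
  step x=month2 y=survival;       /* step function of the KM estimate */
  xaxis values=(0 to 132 by 12);  /* ticks at 0, 12, 24, ..., 132 */
  yaxis label="Survival Probability";
run;
```

The built-in plots=survival graph can also be changed by editing its GTL template, but the OUTSURV=/SGPLOT route is usually simpler for a one-off axis change.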
Democratizing in this context means making Databricks accessible to a wider range of users within an organization, not just code-focused data scientists or engineers. By combining SAS Viya's user-friendly interfaces with the Databricks Lakehouse, more people within an organization can participate in data analysis, decision-making, and innovation, regardless of their technical background. This approach enables organizations to maximize the value of their data assets and drive better business outcomes.
Democratizing tribal knowledge in a digital catalog
Democratizing Databricks with SAS Viya involves converting informal and tacit tribal knowledge about data, analytics, BI dashboards, pipelines, business rules, and decisions into easily accessible digitalized knowledge. This is facilitated by the SAS Information Catalog, which provides effortless access to such information.
Irrespective of the data's location, which in this instance is a Databricks Lakehouse, the SAS Information Catalog offers users a comprehensive understanding of their data from both business and technical perspectives. On the technical front, it provides details like data size, column count, and storage format (e.g., Spark). From a business standpoint, it covers aspects such as status (approved, under review, flagged, etc.), associated business terms and tags, data ownership, automatic explanation, information privacy, and semantic type detection (for PII compliance).
Lastly, from an analytics standpoint, SAS Information Catalog automatically analyzes your columns to aid in understanding quality issues and determining if the data is suitable for analytics purposes.
Figure 1: SAS Information Catalog shows technical and business metadata for a table in Databricks.
Democratizing data literacy through appropriate tools
The SAS Information Catalog is a valuable resource for all users within an organization and facilitates the pursuit of a highly data-literate organization. Improving data literacy is also about providing the proper tools to users. For those eager to delve deep into their Databricks sources, SAS Visual Analytics offers a comprehensive suite of capabilities. Suitable for a wide array of users, including statisticians, data scientists, data engineers, and business analysts, SAS Visual Analytics facilitates ad-hoc analysis for a thorough understanding of Databricks sources. Users can leverage both basic and advanced analytics features extensively. Furthermore, for individuals aiming to create highly informative and visually compelling dashboards that accelerate time to market, SAS Visual Analytics seamlessly integrates robust statistical analysis with appealing business intelligence dashboards.
Figure 2: Dashboard providing insight to readmissions for hospitals in an area of Sweden.
Navigating from SAS Information Catalog to SAS Visual Analytics is as simple as clicking on the Actions menu and choosing "Explore and Visualize." This action automatically transfers the Databricks Spark source into high-speed SAS memory, ensuring immediate accessibility for a wide range of analysis capabilities within SAS Visual Analytics.
Figure 3: Analyzing a Databricks data source in SAS Visual Analytics.
Democratizing code development with the right tools
Truly skilled coders, along with exceptional data scientists and engineers, are a rare and highly sought-after group. This scarcity can lead to a potential lag in important innovations and hinder the achievement of business growth and objectives. However, trying to convert other groups of business users within an organization into proficient coders is a misguided approach. So, how can you empower the rest of your organization—individuals who possess deep business knowledge and a profound understanding of the data—to develop data and analytics pipelines without investing extensive time in enhancing their technical skills?
Patric Hamilton elucidates this development process in his latest blog post, Data Brilliance Unleashed: SAS Data Quality against Databricks - Precision, Performance, Perfection. The key lies in offering user-friendly tools capable of handling both straightforward tasks and intricate, technical coding without requiring users to write a single line of code. The crucial aspect is to ensure these tools facilitate collaboration, enabling both coders and non-coders to assist each other in transforming innovations into production.
Figure 4: SAS Studio data pipeline for entity resolution to create golden or master records.
Conclusion
This comprehensive approach to making Databricks accessible with SAS Viya involves three main strategies: using SAS Viya's user-friendly interfaces and Databricks Lakehouse to increase access, making tribal knowledge easily available through the SAS Information Catalog, and equipping users with the necessary tools for data literacy and code development. By making Databricks available to a wider range of users and providing intuitive tools for analysis and collaboration, organizations can maximize the value of their information assets and achieve better business outcomes.
Learn more about SAS and Databricks
Harness the analytical power of your Databricks platform with SAS
Data everywhere and anyhow! Gain insights from across the clouds with SAS
Elevated efficiency and reduced cost: SAS in the era of Cloud Adoption
SAS and Databricks: Your Practical Guide to Data Access and Analysis
Data to Databricks? No need to recode - get your existing SAS jobs to SAS Viya in the cloud
Maximize Coding and Data Freedom with SAS, Python and Databricks
Data Brilliance Unleashed: SAS Data Quality against Databricks - Precision, Performance, Perfection
Please note: Data used in screenshots are either fictitious demo data or open public data.