SYSTAT 12 has extended the scope of analysis of linear models such as Regression, Analysis of Variance and General Linear Models (GLM) to tackle correlated, clustered, dependent and heteroscedastic data. Use SYSTAT's Mixed Model Analysis to analyze various types of linear mixed effects models such as variance components models, hierarchical mixed models, and mixed regression. Obtain different types of estimates of fixed-effects parameters and variance components, as well as predictions of random effects, all with confidence intervals and hypothesis tests.
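For readers who want to see the underlying technique outside SYSTAT, the following is a minimal Python sketch of a random-intercept linear mixed model using statsmodels. The data file, variable names and grouping factor are hypothetical, and the code illustrates the general method rather than SYSTAT's own syntax or output.

    # Random-intercept mixed model sketch (illustrative only; not SYSTAT syntax).
    # Assumes a data file with response y, fixed-effect covariate x,
    # and a grouping factor "subject" (all hypothetical names).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("repeated_measures.csv")           # hypothetical file
    model = smf.mixedlm("y ~ x", df, groups=df["subject"])
    result = model.fit()                                 # REML estimation by default
    print(result.summary())                              # fixed effects and variance components
    print(result.random_effects)                         # predicted random effects (BLUPs)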
Develop, Improve and Optimize the Performance of a Process
Using data from a well-designed experiment, invoke Response Surface Optimization to determine the levels of input factors and process settings that give the best yield in terms of product characteristics. Estimate response surface parameters, carry out analysis of variance and tests of significance, compute optimum factor settings, produce contour and desirability plots, and carry out ridge analysis.
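As a rough illustration of the idea, the sketch below fits a second-order response surface in two coded factors and searches for the optimum settings within the experimental region. The design points and response values are made-up illustrative numbers, and the code is a generic Python sketch rather than SYSTAT's Response Surface Optimization procedure.

    # Fit a full quadratic response surface and locate the optimum (illustrative data).
    import numpy as np
    from scipy.optimize import minimize

    # Coded factor levels from a central composite design and observed responses
    x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1.414, 1.414, 0, 0])
    x2 = np.array([-1, 1, -1, 1, 0, 0, 0, 0, 0, -1.414, 1.414])
    y  = np.array([76, 79, 80, 84, 90, 91, 89, 75, 83, 78, 82])   # hypothetical yields

    # Design matrix for the quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)          # least-squares estimates

    def predicted_yield(x):
        a, b = x
        return beta @ np.array([1, a, b, a**2, b**2, a * b])

    # Maximize the fitted surface within the experimental region
    res = minimize(lambda x: -predicted_yield(x), x0=[0, 0],
                   bounds=[(-1.414, 1.414), (-1.414, 1.414)])
    print("Optimum coded settings:", res.x, "predicted yield:", -res.fun)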
Extend your Analysis with Partial Least Squares and Robust Regression
Use Partial Least Squares Regression when the data set includes a large number of predictor variables, possibly even greater than the number of cases. SYSTAT provides two standard algorithms: NIPALS and SIMPLS. Obtain standard errors of the estimated regression coefficients by the jackknife procedure. Validate the fitted regression by one of two types of cross-validation: leave-one-out or random exclusion.
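The short Python sketch below shows the same general technique with scikit-learn, whose PLSRegression uses a NIPALS-style algorithm; SIMPLS and the jackknife standard errors mentioned above are not shown, and the simulated data are purely illustrative.

    # PLS regression with more predictors than cases, validated by leave-one-out CV.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 50))                  # 50 predictors, only 20 cases
    y = X[:, :3] @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=20)

    pls = PLSRegression(n_components=3)
    pls.fit(X, y)
    print("Coefficient array shape:", pls.coef_.shape)

    # Leave-one-out cross-validation of the fitted model
    y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut())
    press = np.sum((y - y_cv.ravel())**2)          # prediction error sum of squares
    print("PRESS:", press)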
When a standard multiple linear regression procedure reveals problems with your data set (outliers or influential observations, for example), use a suitable one of SYSTAT's Robust Regression procedures (LAD, LTS, LMS, S, M, Rank) to address them.
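As one hedged example of this family of methods, the sketch below contrasts ordinary least squares with a Huber M-estimator (one of the M procedures listed above) on simulated data containing gross outliers, using statsmodels rather than SYSTAT.

    # M-estimation versus ordinary least squares in the presence of outliers.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, 30)
    y = 2.0 + 0.5 * x + rng.normal(scale=0.5, size=30)
    y[:3] += 15                                    # a few gross outliers

    X = sm.add_constant(x)
    ols = sm.OLS(y, X).fit()                       # pulled toward the outliers
    rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # Huber M-estimator
    print("OLS slope:", ols.params[1], "Robust slope:", rlm.params[1])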
Easily Detect Groupings in Data
SYSTAT's Cluster Analysis gives you a wide variety of distance and similarity measures, clustering criteria, validation indices, and cutting and pruning methods to obtain the most satisfactory hierarchical classification or grouping of the data. Use new linkage methods including uniform, K-neighbourhood, flexible beta, and weighted linkage. Validate your clusters with five new indices. Cut cluster trees based on leaf nodes and tree heights. Use the K-medians algorithm as an alternative to K-means.
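To make the general workflow concrete, here is a brief Python sketch of hierarchical clustering and tree cutting on synthetic data. It uses average linkage from SciPy as a stand-in and does not reproduce SYSTAT's specific linkage options, validation indices or pruning methods.

    # Hierarchical clustering: distances, linkage, and cutting the tree.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(2)
    data = np.vstack([rng.normal(0, 1, (10, 2)),           # two synthetic groups
                      rng.normal(5, 1, (10, 2))])

    d = pdist(data, metric="euclidean")                     # pairwise distances
    Z = linkage(d, method="average")                        # average linkage tree
    labels = fcluster(Z, t=2, criterion="maxclust")         # cut into 2 clusters
    print(labels)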