
3.11 Diagnostic Tests
3.11.1 Normality test

To test whether the data were normally distributed, two statistical tests of normality, the Kolmogorov-Smirnov and the Shapiro-Wilk tests, were performed on the study variables. The findings in Table 1 below show that the p-values are greater than 0.05 for both the Kolmogorov-Smirnov and Shapiro-Wilk tests, implying that the assumption of normality was satisfied in this study.

Table 1: Results of the Kolmogorov-Smirnov and Shapiro-Wilk tests


Tests of Normality
                               Kolmogorov-Smirnov(a)          Shapiro-Wilk
                               Statistic   df    Sig.         Statistic   df    Sig.
Customer Satisfaction          .041        391   .110*        .425        391   .250
Customer Orientation           .587        391   .423*        .368        391   .156
Social Network Interactions    .451        391   .254*        .646        391   .259
CRM based technology           .163        391   .312*        .872        391   .073
Employees’ behavior            .169        391   .344*        .451        391   .158

*. This is a lower bound of the true significance.
a. Lilliefors Significance Correction
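For readers who wish to replicate these checks outside SPSS, the minimal sketch below shows how the two normality tests could be run in Python with scipy; the file name and column names are assumptions, and the plain Kolmogorov-Smirnov test shown does not apply the Lilliefors correction reported in Table 1.

```python
# A minimal sketch of the two normality tests using scipy; the data file
# and column names are placeholders for the study variables.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_data.csv")  # hypothetical data file

for column in ["Customer Satisfaction", "Customer Orientation"]:
    x = df[column].dropna().to_numpy()

    # Shapiro-Wilk test: p > 0.05 suggests no departure from normality.
    sw_stat, sw_p = stats.shapiro(x)

    # One-sample Kolmogorov-Smirnov test against a normal distribution fitted
    # to the sample (SPSS applies the Lilliefors correction; this plain KS
    # test does not).
    ks_stat, ks_p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))

    print(f"{column}: Shapiro-Wilk p={sw_p:.3f}, Kolmogorov-Smirnov p={ks_p:.3f}")
```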

3.11.2 Multicollinearity test

The general assumption of the regression model is that the predictor variables used in the study should be independent of each other. Multicollinearity exists when there are high linear relationships between two or more explanatory variables, violating the assumption that explanatory variables in a study should be independent of each other (Alabi, Ayinde, Babalola, Bello and Okon, 2020). The Variance Inflation Factor (VIF) was used: a VIF value between 1 and 10 indicates no multicollinearity, while a value below 1 or above 10 means that multicollinearity exists (Velnampy et al., 2014). Based on the collinearity statistics in Table 2, the VIF values for all of the independent variables were between 1 and 10, implying that no multicollinearity was detected among the independent variables.

Table 2: Multicollinearity test results for the study variables


Variable                       Tolerance   VIF
Customer Orientation           0.556       1.716
Social Network Interactions    0.410       2.421
CRM based technology           0.400       2.104
Employees’ behavior            0.450       1.578
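As a rough replication of the VIF check, the sketch below uses the statsmodels variance_inflation_factor function; the file name and predictor column names are assumptions standing in for the study's independent variables.

```python
# A minimal sketch of a VIF / tolerance check using statsmodels; the
# DataFrame and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("survey_data.csv")  # hypothetical data file
predictors = ["Customer Orientation", "Social Network Interactions",
              "CRM based technology", "Employees behavior"]

# Add a constant so each VIF is computed against a model with an intercept.
X = sm.add_constant(df[predictors])

for i, name in enumerate(X.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(X.values, i)
    print(f"{name}: VIF = {vif:.3f}, Tolerance = {1.0 / vif:.3f}")
```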

3.11.3 Test for Heteroscedasticity

Heteroscedasticity is a major concern in the application of regression analysis, particularly with cross-sectional data; it occurs when the variances of the error terms are no longer constant and is often investigated by examining the relationship between the error terms and the exogenous variables (Alabi, Ayinde, Babalola, Bello and Okon, 2020). Violation of this assumption makes coefficient estimates less precise, increasing the probability that the estimates are not a true representation of the population. The study used Levene’s test to assess homogeneity of variance, where a significance value above 0.05 indicates that the variances are homogeneous (Okon et al., 2020). As shown in Table 3 below, all the study variables had significance values above 0.05, indicating that the assumption of homoscedasticity was not violated.

Table 3: Heteroscedasticity test results for the study variables


Variable                       Levene Statistic   Sig.
Customer Satisfaction          2.154              0.631
Customer Orientation           1.354              0.125
Social Network Interactions    1.371              0.638
Employees’ behavior            1.561              0.961
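The sketch below shows how Levene's test could be run with scipy. It assumes the responses for each variable are compared across groups defined by a grouping column; the file name, the grouping column, and the variable names are all hypothetical, since the study does not state how the groups were formed.

```python
# A minimal sketch of Levene's test with scipy, assuming each study variable
# is compared across groups defined by a hypothetical grouping column.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_data.csv")  # hypothetical data file
group_col = "branch"                 # hypothetical grouping variable

for column in ["Customer Satisfaction", "Customer Orientation"]:
    groups = [g[column].dropna() for _, g in df.groupby(group_col)]
    stat, p = stats.levene(*groups, center="median")
    # p > 0.05 suggests the group variances are homogeneous.
    print(f"{column}: Levene statistic = {stat:.3f}, Sig. = {p:.3f}")
```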

3.11.4 Reliability

Reliability is an essential measure of consistency and stability in the measurement of a concept (Drost, 2011). Internal consistency was tested using the Cronbach’s Alpha reliability test, with a threshold of alpha = 0.70 or above (Taber, 2017). A reliability coefficient below 0.7 is considered poor and unacceptable; a low value of alpha can be due to a small number of questions, poor interrelatedness between items, or heterogeneous constructs, and such values were rejected. Alpha values of 0.7 and above were accepted, as indicated in Table 4.

Table 4: Reliability test results


Reliability Analysis           Cronbach's Alpha   No. of Items
Customer Satisfaction          0.742              15
Customer Orientation           0.736              5
Social Network Interactions    0.799              5
CRM based technology           0.812              5
Employees’ behavior            0.830              5
Overall Cronbach’s Alpha       0.784
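As a minimal sketch of how Cronbach's Alpha can be computed directly from item-level data, the Python snippet below applies the standard formula with pandas and numpy; the data file and item column names are hypothetical placeholders for the scale items.

```python
# A minimal sketch of a Cronbach's Alpha calculation; item names are assumed.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's Alpha for a DataFrame whose columns are the scale items."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)   # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

df = pd.read_csv("survey_data.csv")                 # hypothetical data file
satisfaction_items = [f"CS{i}" for i in range(1, 16)]  # hypothetical item names
print(f"Customer Satisfaction alpha = {cronbach_alpha(df[satisfaction_items]):.3f}")
```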

3.11.5 Factor Analysis

Factor analysis was conducted using Principal Component Analysis (PCA) as the extraction technique and Varimax as the rotation technique, in order to test the sampling adequacy and the appropriateness of the data for further analysis. The results show that the Kaiser-Meyer-Olkin (KMO) measure was 0.653 and the p-value of Bartlett's test was 0.00, which is acceptable and appropriate for the study as indicated by Pallant (2010).

Furthermore, in applying the Principal Component Analysis and Varimax rotation techniques, a factor-loading cut-off point of 0.7 was employed, which is above the minimum acceptable cut-off point of 0.50 recommended by Pallant (2010) and Tundui (2012).

A number of statements from the independent and dependent variables were eliminated due to poor factor loadings compared to the other statements: ‘NWSC employees exhibit strong commitment in informing customers of new innovations’, ‘There is improved and faster decision making because of the tools used’, and ‘Sending e-messages to customers enhances customer satisfaction’. The remaining items had internal consistency and were thus reliable for further analysis in the study. The KMO and Bartlett's test results are presented in Table 5.

Table 5: KMO and Bartlett's Test


Kaiser-Meyer-Olkin Measure of Sampling Adequacy        .653
Bartlett's Test of Sphericity    Approx. Chi-Square    557.2
                                 df                    390
                                 Sig.                  .000
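The sketch below illustrates how the KMO measure, Bartlett's test, and a Varimax-rotated principal-component extraction could be reproduced in Python. It relies on the third-party factor_analyzer package; the data file, the number of factors, and the item columns are assumptions rather than values taken from the study.

```python
# A minimal sketch of KMO / Bartlett checks and Varimax-rotated PCA using
# the third-party factor_analyzer package; inputs are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("survey_items.csv").dropna()  # hypothetical item-level data

chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"KMO = {kmo_total:.3f}, Bartlett chi-square = {chi_square:.1f}, p = {p_value:.3f}")

# Principal-component extraction with Varimax rotation; loadings below the
# chosen cut-off (0.7 in the study) would flag items for elimination.
fa = FactorAnalyzer(n_factors=5, method="principal", rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings[(loadings.abs() < 0.7).all(axis=1)])  # items with no loading >= 0.7
```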

3.12 Data Analysis

3.12.1 Research Data Analysis.


The data gathered will be analyzed and presented using descriptive statistics. Schacher (2002) suggested that descriptive studies be analyzed using descriptive statistics. Descriptive statistics involve the tabulation and organization of data to demonstrate their main characteristics, using techniques such as measures of central tendency, measures of dispersion, correlation, and graphical presentations.
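A minimal sketch of how such descriptive summaries could be produced with pandas is shown below; the data file is a hypothetical placeholder.

```python
# A minimal sketch of a descriptive summary with pandas; the file name is assumed.
import pandas as pd

df = pd.read_csv("survey_data.csv")  # hypothetical data file

# Measures of central tendency and dispersion for each numeric variable.
print(df.describe())                   # count, mean, std, min, quartiles, max
print(df.median(numeric_only=True))    # medians
print(df.corr(numeric_only=True))      # pairwise Pearson correlations
```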

3.13 Analytical Model for Operationalizing the Study Variables


3.13.1 Correlation between RBIA Approaches and ICS Implementation
The data collected will be analyzed in order to determine the relationship between RBIA approaches and the implementation of internal control systems in healthcare service delivery NGOs in Uganda.

To establish the relationship between RBIA approaches and Internal Control Systems Implementation among healthcare service delivery NGOs in Uganda, the computation of Pearson’s Product Moment Correlation Coefficient will be preceded by testing the linearity of the data collected. A scatter diagram with a line of best fit will be drawn to examine this; a linear pattern between the two variables will indicate a relationship.

Pearson’s-Product moment correlation coefficient will be computed following the formula


below;

r_{xy} = \frac{n(\sum xy) - (\sum x)(\sum y)}{\sqrt{[n\sum x^{2} - (\sum x)^{2}]\,[n\sum y^{2} - (\sum y)^{2}]}}

Where;
n = the number of paired observations,
∑xy = the sum of the cross products of RBIA approaches or factors (x) and ICS implementation (y),
∑x² = the sum of the squared values of RBIA approaches or factors,
∑y² = the sum of the squared values of ICS implementation,
(∑x)² = the square of the sum of RBIA approaches,
(∑y)² = the square of the sum of ICS implementation values.
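To illustrate the formula, the sketch below computes r directly from the expression above and cross-checks it with scipy's built-in Pearson correlation; the numeric arrays are purely illustrative placeholders for RBIA-approach and ICS-implementation scores.

```python
# A minimal sketch of the Pearson correlation formula with numpy, plus
# scipy's pearsonr as a cross-check; the data are hypothetical.
import numpy as np
from scipy import stats

x = np.array([3.2, 4.1, 2.8, 3.9, 4.5])  # hypothetical RBIA approach scores
y = np.array([3.0, 4.3, 2.5, 3.8, 4.6])  # hypothetical ICS implementation scores

n = len(x)
numerator = n * np.sum(x * y) - np.sum(x) * np.sum(y)
denominator = np.sqrt((n * np.sum(x**2) - np.sum(x)**2) *
                      (n * np.sum(y**2) - np.sum(y)**2))
r_manual = numerator / denominator

r_scipy, p_value = stats.pearsonr(x, y)
print(f"manual r = {r_manual:.4f}, scipy r = {r_scipy:.4f}, p = {p_value:.4f}")
```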

3.13.2 Regression Model.


The data collected will be analyzed in order to determine the causal effects of the combined RBIA approaches on Internal Control Systems Implementation among healthcare service delivery NGOs in Uganda.

Regression analysis will be used. The β coefficients from the equation will represent the strength and direction of the relationships between the variables being studied.

A linear regression technique will be employed, and the results will be computed based on the linear regression model below:

Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \beta_4 X_4 + \beta_5 X_5 + \dots + \beta_n X_n + \varepsilon

Where;
Y = the dependent variable (Internal Control Systems Implementation, ICSI),
X_1 = Risk Governance,
X_2 = Continuous Professional Development,
X_1 … X_n = the independent variables (RBIA approaches) described above,
\beta_0 = the constant (intercept),
\beta_1 … \beta_n = the regression coefficients, i.e. the change induced in Y (ICSI) by each X (RBIA approach),
\varepsilon = the error term.
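A minimal sketch of how this model could be estimated with statsmodels ordinary least squares is shown below; the file name and column names are assumptions, not the study's actual variables.

```python
# A minimal sketch of the multiple linear regression using statsmodels OLS;
# the data file and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("rbia_data.csv")                       # hypothetical data file
X = df[["risk_governance", "cpd", "x3", "x4", "x5"]]    # hypothetical RBIA predictors
y = df["icsi"]                                          # ICS implementation score

X = sm.add_constant(X)        # adds the beta_0 intercept term
model = sm.OLS(y, X).fit()
print(model.summary())        # coefficients, t-values, R-squared, etc.
```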

3.14 Structural Model Path Coefficients


In order to test the hypotheses, the researcher will use structural equation modelling with Partial Least Squares (PLS). PLS is an extension of traditional least-squares regression that uses iteration to extend regression principles to more complex constructs (Hair et al., 2013). The analysis will be done by testing the relationships among the constructs, and the hypotheses will be tested using the structural model. Path coefficients of the structural model can be obtained through the PLS algorithm calculation (Ramayah, 2014). Path coefficients indicate the hypothesized relationships among the variables and whether they are positive or negative. A PLS bootstrapping calculation will be run after the PLS algorithm calculation in the structural model to obtain the t-values. Commonly used critical values for one-tailed tests are 1.645 at the 95% significance level and 2.33 at the 99% significance level (Ramayah, 2014). When the t-value is greater than the critical value, it will be concluded that the path coefficient is significant. Hair et al. (2013) posited that there are four key criteria for assessing the structural model in PLS-SEM. These include assessments of:

(i) Significance of the path coefficients,


(ii) Coefficient of determination (R²),
(iii) The effect size (f²), and lastly
(iv) Predictive relevance (Q²).
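The bootstrapping step described above can be illustrated outside a dedicated PLS package. The sketch below is not a full PLS-SEM implementation; under simulated data it shows how a bootstrap distribution of a path (regression) coefficient yields a t-value that is compared against the 1.645 and 2.33 one-tailed critical values.

```python
# An illustrative sketch (not a full PLS-SEM implementation) of bootstrapping
# a path coefficient and comparing its t-value to one-tailed critical values.
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)                        # hypothetical construct score (RBIA approach)
y = 0.4 * x + rng.normal(scale=0.9, size=n)   # hypothetical outcome score (ICSI)

def path_coefficient(x, y):
    """Simple regression slope used as a stand-in for a PLS path coefficient."""
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

original = path_coefficient(x, y)
boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, size=n)          # resample observations with replacement
    boot[i] = path_coefficient(x[idx], y[idx])

t_value = original / boot.std(ddof=1)         # bootstrap standard error in the denominator
print(f"path = {original:.3f}, t = {t_value:.2f}, "
      f"significant at 95%: {t_value > 1.645}, at 99%: {t_value > 2.33}")
```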
