Assessment Exam 1
Similarities/Differences between “assessment” and “testing”
Testing is gathering information at ONE POINT IN TIME; it is often JUST ONE PART of a teacher's assessment process. An assessment is "a strategically planned process that gathers information to help a student improve".
Formal Assessment (give definition and examples)
They are generally more "high-stakes" and have objective, standardized administration and scoring. ex. norm-referenced tests, FCAT, SAT, etc.
Informal Assessments (give def. and ex)
Less structured, more subjective than formal assessments. Clear connection to curriculum and instruction. ex. observation of student, teacher-made tests/quizzes, review of previous work.
Norm-Referenced tests
Compares a student to other students
What kind of impact has IDEA2004 and NCLB had on special education assessment?
It has increased accountability and placed important emphasis on appropriate ongoing assessment practices
What are the four purposes of special education assessment? (know each item in depth, check notes from 8/30)
Screening, Eligibility Determination, Program Planning, Progress Monitoring
What are the two primary questions that determine if a student is eligible for Special Education?
1. Does child have identified disability under IDEA?
2. Does the disability manifest in particular difficulties in the school setting such that special ed. services are necessary?
What is the purpose of the “Team Approach” to assessment?
Combines skills, knowledge, experience, and expertise to determine the best track for a student
What is the link b/w Assessment and IEP development?
You must continually assess your student and compare the results with the goals in the IEP to determine the effectiveness of your curricular content and instructional methods
SPED Assessment Process (3 steps) (know each in-depth 9/6 notes)
1. ID and referral 2. determination of eligibility 3. Program Planning
Discuss Parent Notification during ID and referral of student
Parents must be informed IN WRITING of the referral and provided a copy of their rights/Procedural safeguards. They have the right to be involved in and provide written consent for assessment.
How often must an IEP be updated
every year
How often are you required to redetermine eligibility for each student?
Every 3 years
What does IEP stand for?
Individualized Education Program
What does IAP stand for?
Individualized Assessment Plan
Interval Scales of Measurement
Most commonly used scale in assessment; it's how classroom quizzes and tests are usually measured, as well as most standardized tests. Scores start at an arbitrary point (usually zero on tests and quizzes) and go up in equal intervals (like a test scored on a 0-100 point scale).
Descriptive Statistics
Statistics that summarize a set of scores/assessment information in one place (ex. central tendency, variability, correlation)
Central Tendency
Describing a set or distribution of data with one index that represents the entire set (ex. mean, median, mode)
Variability
describes the spread or dispersion of a distribution (ex. range or standard deviation)
Correlation
expresses the degree of relationship b/w two sets of scores.
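Not in the notes, but a quick Python sketch (with made-up quiz scores) may help make central tendency, variability, and correlation concrete:

```python
# Illustrative sketch with invented scores; requires Python 3.10+ for
# statistics.correlation().
import statistics

scores = [72, 85, 85, 90, 78, 95, 88]   # one set of quiz scores
retest = [70, 83, 88, 92, 75, 97, 85]   # same students on a second quiz

# Central tendency: one index that represents the whole set
print("mean  :", statistics.mean(scores))
print("median:", statistics.median(scores))
print("mode  :", statistics.mode(scores))

# Variability: spread or dispersion of the distribution
print("range :", max(scores) - min(scores))
print("stdev :", statistics.stdev(scores))   # sample standard deviation

# Correlation: degree of relationship between two sets of scores
print("r     :", statistics.correlation(scores, retest))   # Pearson's r
```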
How do you evaluate the appropriateness of test norms?
-age/grade/gender
-method of selection
-representativeness of the norm group
-size of the norm group
-recency of test norms
Reliability (definition)
consistency, or stability
Test-Retest (type of reliability)
correlation b/w scores from 2 administrations (testing points) of the same test to the same students
Equivalent Forms (type of reliability)
Correlation b/w scores on different forms of the same test
Split-half/internal consistency
Test is divided in half; the correlation b/w scores on the two halves is computed across students
Scorer or interrater/interobserver
responses are rated by more than one observer/scorer, and the ratings are compared or correlated to check agreement
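Beyond the definitions, a small Python sketch (invented scores) shows how two of these reliability estimates are actually computed; the Spearman-Brown correction for the split-half estimate is the standard adjustment, though the notes don't name it:

```python
# Sketch of test-retest and split-half reliability with made-up data
# (requires Python 3.10+ for statistics.correlation()).
import statistics

# Test-retest: correlate scores from two administrations of the same test
time1 = [40, 55, 62, 48, 70, 66]
time2 = [42, 53, 65, 50, 72, 63]
r_test_retest = statistics.correlation(time1, time2)

# Split-half: correlate the two halves (e.g., odd vs. even items), then
# adjust with the Spearman-Brown formula to estimate full-test reliability
odd_half  = [20, 27, 30, 25, 36, 34]
even_half = [19, 28, 32, 23, 34, 31]
r_half = statistics.correlation(odd_half, even_half)
r_full = (2 * r_half) / (1 + r_half)   # Spearman-Brown correction

print(f"test-retest r = {r_test_retest:.2f}, split-half (corrected) = {r_full:.2f}")
```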
Validity
Does the test measure what it says it’s going to measure?
Content validity
do the test items adequately cover/represent the content the test is supposed to measure?
Criterion-related validity
How do scores on the test relate to other criteria (external measures)?
Predictive validity (type of criterion-related validity)
How well does it predict future performance on related measures?
Concurrent validity (type of criterion-related validity)
How does it correlate with another measure of the same content given at about the same time?
Measurement Error
Observed score (X) = True score (T) + Error (E)
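A tiny Python sketch (hypothetical true score and error size) can illustrate the formula: the same student's observed scores scatter around the true score because of random error.

```python
# Sketch of X = T + E with an invented true score and error SD of 3 points.
import random

random.seed(1)
true_score = 85                                   # hypothetical "true" ability
observed = [true_score + random.gauss(0, 3) for _ in range(5)]
print([round(x, 1) for x in observed])
# The spread of these observed scores around 85 is the measurement error.
```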
5 Main Areas of Assessment for Reading
1. Phonological Awareness
2. Decoding (Sight Words)
3. Fluency
4. Vocabulary
5. Comprehension
3 Main Areas of Assessment for Mathematics
1. Computation
2. Application
3. Fluency
3 Main Areas of Assessment for Written Expression
1. Word Level
2. Sentence-Level
3. Paragraph Level
4 Types of Scoring of Written Expression
1. Analytical
2. Holistic
3. Fluency
4. Common Measures
4 Essentials for Preparing to give an Assessment
1. Preparation
2. Environment
3. Materials
4. Rapport
KNOW How to label the basal and ceiling. Also, know how to calculate chronological age. (look at your notes from 9/20 and our “Sally Sample” packet)
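The exact convention is in the 9/20 notes and the "Sally Sample" packet, but a short Python sketch of the usual borrow-and-subtract method for chronological age may help (the 30-day borrow and the example dates below are assumptions, not from the notes):

```python
# Sketch of the borrow-and-subtract chronological-age calculation commonly
# used on test protocols; check the class packet for the exact rounding rule.
def chronological_age(test_date, birth_date):
    """Return (years, months, days) between birth_date and test_date."""
    ty, tm, td = test_date
    by, bm, bd = birth_date
    if td < bd:            # borrow 30 days from the months column
        tm -= 1
        td += 30
    if tm < bm:            # borrow 12 months from the years column
        ty -= 1
        tm += 12
    return ty - by, tm - bm, td - bd

# Example: tested 2011-09-20, born 2003-04-25
print(chronological_age((2011, 9, 20), (2003, 4, 25)))   # -> (8, 4, 25)
```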
List a few examples of ways that assessments may need to be modified for a student with special needs
1. untimed
2. use of a calculator
3. let them use a word processor to write their answers
4. read them the questions.
Derived scores
allow for a more accessible way of comparing student performance to the norm group (peers).
Examples of derived scores
standard scores, scaled scores, percentile ranks, age/grade equivalents
Standard Scores
Transforms a raw score to a new scale based on the norm sample's distribution (typically the mean = 100 and the standard deviation = 15)
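A brief Python sketch of that transformation (the norm group's raw-score mean and SD below are made up for illustration):

```python
# Sketch of converting a raw score to a standard score via a z-score,
# assuming the norm sample's raw-score mean and SD are known.
def standard_score(raw, norm_mean, norm_sd, new_mean=100, new_sd=15):
    z = (raw - norm_mean) / norm_sd      # z-score relative to the norm group
    return new_mean + new_sd * z

# Raw score of 34 on a test where the norm group's mean = 28, SD = 6
print(standard_score(34, 28, 6))         # -> 115.0 (one SD above the mean)
```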