Texas Tech Literacy for Students – Learning.com’s Response

In response to this blog entry on Transparency in Texas Technology Literacy for Student Assessments, Michael Harris (Learning.com) shared responses to the questions posed in the original entry. Those questions included the following:

  • Have you published your weighting or grading scale for the assessments?
  • How do your assessments match the Technology Applications:TEKS electronic materials? How about the revised ISTE National Education Technology Standards for Students?
  • Will you be publishing an overview of all Texas–and perhaps other states as well–school district scores (how many 8th graders assessed, percent passing, etc.)?

Again, it is important to ask these questions of TEA, the commercial vendors, and TCEA. The goal is to ensure that everyone clearly understands the purpose of the assessments, how and by what criteria they were implemented, and how this has affected the entire process of preparing children to meet NCLB Technology Literacy requirements.

Note: Michael was kind enough to post the following as a comment, but I thought it should be reposted as a separate blog entry to reach more folks.

Michael Harris’ (Learning.com) comment:

Learning.com is glad that these important questions have been raised about the assessment of tech literacy, and would like to help by answering the questions below as they relate to our TechLiteracy Assessment.

1. “How have each of these instruments been checked for validity/reliability?”

Third-party psychometric validation
TechLiteracy Assessment uses questions that were validated by established third-party psychometricians following national beta testing to ensure accurate, usable reporting data. The questions are a mix of performance-based items, which use simulated software with realistic choices and often multiple correct answers so that students can authentically show they can complete a complex task, and multiple-choice, knowledge-based questions that use text and often graphical examples.
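
To make the general idea concrete, here is a minimal sketch of the kind of classical item analysis a post-beta psychometric review can involve, computing item difficulty (proportion correct) and point-biserial discrimination. The data, item names, and statistics below are invented for illustration; Learning.com's actual procedures and figures are not shown here.

    # Hypothetical sketch of classical item analysis (Python); the scored
    # responses below are invented and do not reflect actual beta data.
    import numpy as np

    def item_statistics(scored):
        """scored: students x items matrix of 0/1 scored responses."""
        totals = scored.sum(axis=1)                  # each student's raw total
        results = []
        for i in range(scored.shape[1]):
            item = scored[:, i]
            difficulty = item.mean()                 # p-value: proportion correct
            rest = totals - item                     # total score excluding this item
            # point-biserial: correlation between the item and the rest of the test
            discrimination = np.corrcoef(item, rest)[0, 1]
            results.append((i, difficulty, discrimination))
        return results

    # Small invented beta sample: 6 students, 4 items.
    beta = np.array([
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 0, 0, 0],
        [1, 1, 1, 1],
        [1, 1, 0, 0],
        [0, 1, 0, 1],
    ])
    for idx, p, rpb in item_statistics(beta):
        print(f"item {idx}: difficulty={p:.2f}  point-biserial={rpb:.2f}")

Items that turn out to be too hard, too easy, or poorly discriminating in an analysis like this are the ones a review panel would typically revise or drop.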

Appropriate reading levels
To ensure that we are testing technology literacy rather than English reading skills, the Elementary School version is written at a third-grade reading level, and the Middle School version of TechLiteracy Assessment is written at a sixth-grade reading level. Each item assesses students' skill level on durable concepts and strategies that extend beyond specific brands of software, requiring students to demonstrate adaptable, generalized technology skills.

Proficiency standards
TechLiteracy Assessment was designed to measure student proficiency in technology skills and knowledge. To test this, it was first necessary to define proficiency at both the elementary and middle school levels. The proficiency benchmarks were created after an exhaustive survey of state and national technology standards and were then reviewed by standards experts to determine what students need to know to be successful. These standards were used to determine the nationally prevalent skill and knowledge expectations and requirements for elementary and middle school students.

When standards serve as educational goals, they often need to be revised into statements of achievement before they can be measured. This requires breaking standards down into component parts and linking them to specific actions. For example, a standard requiring students “to understand software menus” can best be assessed by asking the student to perform a task that requires use of software menus. TechLiteracy Assessment items were written to assess student ability in these standards. Items were then tested with students in field studies in different states and among different demographic populations.

The prevalent standards, items, and student performance data were then scrutinized by a qualified national panel of technology instruction experts with classroom, district-level, and academic research experience. This panel, in conjunction with expert psychometricians, examined the data and made two determinations. First, it confirmed that TechLiteracy Assessment does effectively measure grade-appropriate student skills and knowledge in technology. Second, it determined where the bar for proficiency in technology literacy should lie for the national elementary and middle school student populations. This determined the Proficiency Standard used in TechLiteracy Assessment.

Ongoing psychometric review
Each item and each test form (the pre-test and post-test for 5th grade, and the pre-test and post-test for 8th grade) is examined anew after every testing window and is reviewed by a staff of highly experienced and qualified psychometricians to ensure that student answers and abilities are measured accurately against the stated benchmarks.

Preservation of scoring validity
To secure the validity of the assessment, customers are not able to change which questions appear on the assessment or to make changes to scoring. Student results can be directly compared across pre- and post-tests, from year to year, and across classes, schools, and districts, as well as nationally, using the same psychometrically valid scoring. The reports include national averages for comparison purposes.
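
As an illustration of the kind of comparison a shared scale makes possible, the sketch below averages invented scaled scores for a class on the pre-test and post-test and compares each to an invented national average; none of these numbers are actual TechLiteracy Assessment results.

    # Hypothetical sketch of comparing group averages to a national average on
    # the shared scale; all figures below are invented for illustration.
    from statistics import mean

    national_average = 231    # invented national scaled-score average

    groups = {
        "Class 5A, pre-test":  [204, 218, 226, 240, 212],
        "Class 5A, post-test": [221, 235, 242, 255, 229],
    }

    for name, scores in groups.items():
        avg = mean(scores)
        print(f"{name}: average {avg:.0f} ({avg - national_average:+.0f} vs. national)")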

2. “Who at the commercial vendors (e.g. Learning.com) makes the decision about the weighting of certain test items over another? Have they been transparent about this?”

On TechLiteracy Assessment, no item is weighted over another. Scale scores indicate proficiency and are the only scores that are comparable from one test to the next; point values per item are not. While the number of points is calculated without numerical weighting, each new assessment contains questions of varying difficulty. The combination of items on each new test is analyzed by psychometricians using Item Response Theory to determine the test characteristic curve, which sets the new cutoff. The scaled passing score will always be 220; however, depending on the number of correct answers needed to obtain that score, the number of points each question is worth will vary from form to form.

This means that if the test characteristic curve indicates that the combination of items is more difficult than before, fewer items need to be answered correctly to show proficiency, and more items have their points averaged to fit within the remaining 80 points of the scale score. Conversely, if the analysis determines that the items are less difficult than before, more items must be answered correctly to achieve proficiency, and fewer items are left to be averaged into the remaining points. Points per item typically differ on either side of the cut mark and are determined by psychometric analysis of each new test form.
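
As a rough illustration of how a fixed scaled cut can coexist with form-specific raw cuts, the sketch below assumes a two-parameter logistic (2PL) IRT model, sums item probabilities to form the test characteristic curve, and reads off the expected raw score at the proficiency ability level. All item parameters, the ability cut, and the form names are invented; the actual TechLiteracy Assessment parameters are not public.

    # Hypothetical sketch: a test characteristic curve (TCC) yields a different
    # raw-score cut for each form while the scaled passing score stays at 220.
    # The 2PL parameters and the proficiency ability level are invented.
    import math

    def p_correct(theta, a, b):
        """2PL probability of answering an item correctly at ability theta."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def expected_raw_score(theta, items):
        """TCC value: expected raw score at ability theta for a given form."""
        return sum(p_correct(theta, a, b) for a, b in items)

    THETA_CUT = 0.0    # ability level the standard-setting panel tied to proficiency
    SCALED_CUT = 220   # fixed scaled passing score

    # Two invented 5-item forms; form B uses harder items (higher b values).
    form_a = [(1.2, -0.8), (1.0, -0.3), (0.9, 0.0), (1.1, 0.4), (1.3, 0.9)]
    form_b = [(1.2, -0.2), (1.0, 0.3), (0.9, 0.6), (1.1, 1.0), (1.3, 1.4)]

    for name, items in (("form A", form_a), ("form B", form_b)):
        raw_cut = expected_raw_score(THETA_CUT, items)
        print(f"{name}: about {raw_cut:.1f} raw points correspond to scaled {SCALED_CUT}")

Because form B is harder, its expected raw score at the proficiency ability level is lower, so fewer correct answers are needed to reach the same scaled 220, which is the behavior described above.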

3. “How transparent is the Texas Education Agency in sharing the directions provided to the commercial vendor chosen for their technology literacy assessment pilot?”

I believe you will find answers to questions 3, 4, 5, and 8 in the TEA’s “Progress Report on the Long-Range Plan for Technology, 2006-2020,” which can be found at:

http://www.learning.com/states/pdf/TEA-Progress-Report-Long-Range-Tech-Plan.pdf

4. “Does TEA plan to release results similar to those reflected in my simple survey above?”

(see answer 3)

5. “What are the usage statistics for Technology Applications:TEKS electronic materials?”

(see answer 3)

6. “There are obvious benefits to having students in Texas being perceived to score low (e.g., ‘Our scores are awful, we need more funding.’), but the converse is also true. The reaction might be this: ‘TEA, you’ve funnelled funding to schools for quite some time…and these are the results you have to show for it?’

“But HIGH scores–perhaps inflated, we don’t know–might also allow TEA to say, ‘See? We’ve invested in technology–for TA:TEKS Electronic Curriculum, Technology Immersion–for public schools and it’s starting to pay off in higher test scores.’ Which is truer, or is the truth in another quadrant of reality?”

TechLiteracy Assessment’s Proficiency Standards for fifth grade and for eighth grade were set by the standard-setting panel and enable assessment scores to be compared one to one across the nation and over time. Authentic assessment of software skills, using simulations with multiple correct answers, accurately demonstrates student ability where memory-based questions cannot. TechLiteracy Assessment is an age-appropriate, criterion-referenced assessment. The assessment is limited to 47 questions to prevent a potential fatigue factor from influencing student performance. Different pre- and post-tests enable meaningful reporting at year’s end. Psychometric validation ensures the accuracy of the assessment.

7. “Have you published your weighting or grading scale for the assessments?”

The scale for the assessments is published on the customers’ reports along with comparative data that enables customers to compare the proficiency of their students with national results. TechLiteracy Assessment does not use weighting. The score begins at 100 to prevent confusion with percentage scores of 0 to 100%. It extends from 100 to 300 to prevent confusion resulting from trying to draw inaccurate relationships with other, unrelated assessments. Minimal proficiency is indicated when the student achieves 220 points.
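
As a simple illustration of that scale, the snippet below classifies scores on the 100-to-300 band using the 220 proficiency cut described above; the raw-to-scaled conversion itself is omitted here because it is derived psychometrically for each test form.

    # Hypothetical sketch of reporting against the published scale:
    # scores run from 100 to 300 and 220 marks minimal proficiency.
    SCALE_MIN, SCALE_MAX, PROFICIENCY_CUT = 100, 300, 220

    def classify(scaled_score):
        if not SCALE_MIN <= scaled_score <= SCALE_MAX:
            raise ValueError("scaled scores fall between 100 and 300")
        return "proficient" if scaled_score >= PROFICIENCY_CUT else "not yet proficient"

    for score in (185, 220, 264):
        print(score, classify(score))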

8. “How do your assessments match the Technology Applications:TEKS electronic materials? How about the revised ISTE National Education Technology Standards for Students?”

TechLiteracy Assessment aligns to the Texas TEKS-TA and was created in collaboration with educators from Austin, TX. The assessment is also aligned to the NETS-2007. In addition, Learning.com offers the 21st Century Skills Assessment, which reports directly against the NETS-S 2007 standards.

9. “Will you be publishing an overview of all Texas–and perhaps other states as well–school district scores (how many 8th graders assessed, percent passing, etc.)?”

(see answer 3)

Thank you for the opportunity to contribute to this dialogue.

Michael Harris
Product Manager for TechLiteracy, Learning.com

