Transparency in Texas Tech Literacy for Student Assessments


Image: Based on results cited below.

A colleague recently challenged my thinking about the Texas Technology Literacy Assessment for 8th graders. As you may recall, I asked Texas technology directors to share the results of their 2007-2008 8th grade students (assessed as 9th graders in Fall 2008) on a technology literacy assessment.

The results are startling:


(Note: You can download the results (Excel format | OpenOffice format) and check my numbers at any time.)

The reason I asked Texas districts to share their results was that I noticed a trend–students assessed with Infosource Learning’s SimpleAssessment.com scored SIGNIFICANTLY LOWER. At first, I thought, “Wow, students in Texas don’t know much about technology use!” Then again, considering the slow adoption of technology into core content areas, that’s not a surprise.

Then, someone made this observation–with slight edits on my part to anonymize the information–about their own Texas public school district’s results:

  1. My district’s students have not been taught the Tech Apps TEKS, and yet over 60% passed the assessment.
  2. A passing score on this assessment was 70%, but to earn that 70% a student only needed to get 20 of the 40 items correct (a worked illustration of this scoring follows this list).
  3. A one hour discussion on the reliability and validity of the assessment with the vendor did not yield a positive reaction about the integrity of the assessment.
  4. We used Learning.com as our assessment since this is the same one TEA is using for its assessment pilot. Our cost was approximately $18,000.
  5. Infosource Learning’s SimpleAssessment.com was another alternative and it was free and their scores were much lower. I believe the lower scores are much closer to what our students really know.
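
To make the arithmetic in item 2 concrete: 20 of 40 items is a raw 50%, so a reported 70% passing mark must come from some scaled, curved, or weighted conversion rather than straight percent-correct. The sketch below is a minimal illustration in Python, assuming a hypothetical piecewise-linear mapping that anchors the 20-of-40 raw cut at the reported 70% mark–it is not any vendor’s published formula.

    # Hypothetical illustration: how a raw cut of 20/40 can surface as a "70%" passing score.
    # The piecewise-linear mapping below is an assumption for illustration only; it is NOT
    # a vendor's published conversion.

    RAW_CUT = 20          # raw items correct needed to pass (per the district's observation)
    TOTAL_ITEMS = 40      # total items on the assessment
    REPORTED_CUT = 70.0   # the passing mark as reported to districts
    REPORTED_MAX = 100.0  # a perfect reported score

    def reported_score(raw_correct: int) -> float:
        """Map a raw item count onto the 0-100 reported scale (hypothetical)."""
        if raw_correct <= RAW_CUT:
            # At or below the cut: spread 0-70 reported points across 0-20 raw items.
            return raw_correct * (REPORTED_CUT / RAW_CUT)
        # Above the cut: spread the remaining 30 reported points across the remaining 20 items.
        return REPORTED_CUT + (raw_correct - RAW_CUT) * (
            (REPORTED_MAX - REPORTED_CUT) / (TOTAL_ITEMS - RAW_CUT)
        )

    if __name__ == "__main__":
        for raw in (10, 20, 30, 40):
            print(f"{raw}/{TOTAL_ITEMS} raw correct -> reported {reported_score(raw):.0f}")
        # 20/40 (a raw 50%) lands exactly on the 70% reported passing mark.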

So the questions hinted at in this data and in that observation–but not overtly stated, and which I’m going to try to state here–include the following:

  • How have each of these instruments been checked for validity/reliability?
  • Who at the commercial vendors (e.g. Learning.com) makes the decision about the weighting of certain test items over another? Have they been transparent about this?
  • How transparent is the Texas Education Agency in sharing the directions provided to the commercial vendor chosen for their technology literacy assessment pilot?
  • Does TEA plan to release results similar to those reflected in my simple survey above?
  • What are the usage statistics for Technology Applications:TEKS electronic materials?
  • There are obvious benefits to having students in Texas being perceived to score low (e.g. “Our scores are awful, we need more funding.”) but the converse is also true. The reaction might be this: “TEA, you’ve funnelled funding to schools for quite some time…and these are the results you have to show for it?”

    But HIGH scores–perhaps inflated, we don’t know–might also allow TEA to say, “See? We’ve invested in technology–for TA:TEKS Electronic Curriculum, Technology Immersion–for public schools and it’s starting to pay off in higher test scores.” Which is truer or is the truth in another quadrant of reality?

Since part of this discussion has taken place on the TCEA TEC-SIG list, a list composed primarily of technology directors, it is incumbent on the TCEA membership to ask some additional questions:

  • What is TCEA doing–a la the American Library Association (ALA)–to hold TEA accountable for assessment measures that almost all respondents to my survey considered NOT VALID?
  • What should TCEA do to advocate on behalf of school districts–to TEA and the State Legislature–regarding the assessment protocols put into place to ascertain the technology literacy of students who were 8th graders during the 2007-2008 school year?
  • What should public school district superintendents be encouraged to consider regarding technology literacy at the upcoming TASA Midwinter Conference, and what is the TCEA Board of Directors doing about that?
  • What partnerships has TCEA developed that impinge on its objectivity in regards to state assessments, and has it disclosed those?

Some questions for the vendors of these assessments:

  • Have you published your weighting or grading scale for the assessments?
  • How do your assessments match the Technology Applications:TEKS electronic materials? How about the revised ISTE National Education Technology Standards for Students?
  • Will you be publishing an overview of all Texas–and perhaps other states as well–school district scores (how many 8th graders assessed, percent passing, etc.)?


Again, it is important to ask these questions of TEA, the commercial vendors, and TCEA. The goal is not to put them on the spot, but rather to ensure that everyone clearly understands the purpose of the assessments, how and by what criteria they were implemented, and how all of this has affected the process of preparing children to meet NCLB technology literacy requirements.

If you are a Texas district and would like to participate in the survey–anonymously–please let me know.




7 comments

  1. So is it that Learning.com’s assessment is easier, or does it just fit the mold of what today’s kids are doing? I assure you they are all too different to compare. Uh-oh. Standardized? Anyway, I know our kids are much more tech literate than the assessment shows, but that goes for the core class standardized tests as well. They need a product (or products) to be able to get an accurate assessment.


  2. We used the SimpleAssessment product because it was free. We found that the first time the kids took it, they all failed miserably. We began to look at the causes of their failure and found that the majority spent less than 10 minutes taking the 60-question test. Some even took less than 5 minutes. We shared the results with the principal and he wanted them retested after he had had a few words with them. The second time around the kids did better, but there were still some failures. We found that the SimpleAssessment test had two major problems for our kids: 1) it tested over Office 2003 instead of 2007, which we have already switched to; and 2) it did not allow shortcuts, but wanted the full path to most of the tasks. Overall we thought it was a fair assessment of the kids’ tech savvy. Much better than we could have done ourselves.


  3. Boy, this subject can really get me going. No one should be surprised about the low performance levels. What real effort statewide have we put into the system to ensure high performance levels? In the core subjects, they have been refining their test materials, teaching methods, and staff development materials for 11 years, trying to get better classroom instruction: Reading Academies, Math Academies, all types of benchmarks and assessments, online staff development tools, continued reassessment of the testing materials themselves. And every year they still have to scramble to get better. Somehow technology is supposed to obtain great results with no effort. Publish the TEKS, buy some computers, buy the technology textbooks, and everything magically falls into place. You don’t actually have to do anything because it is all integrated. Reading has spent the last 10 years developing a common vocabulary on reading proficiency and standards so that student success can be measured. For technology, we ask teachers to fill out a self-assessment with no clear standards. We still have teachers rate themselves technology advanced because they use email and their kids take AR tests. How will that ever result in a Target Tech proficiency level? It will be interesting to see what happens with the reporting of the test results and the earlier online testing survey, especially with a new legislative session. Both should illuminate major weaknesses in the current structure, from funding to staff development to understanding what “technology literate” should mean.


  4. Have you looked at the SimpleAssessment questions? They are not aligned with the TA TEKS at all. Many of the questions refer to email, social networking, Web 2.0 tools, and other topics that many districts in Texas block or do not let students use. If students do use them, they are not taught to be good digital citizens in these areas. Most Texas districts play ostrich with their heads in the sand. Therefore, it is not surprising that the scores are lower on SimpleAssessment. Again, I am not sure how valid a multiple-choice question is in determining technological literacy.


  5. We did the same SimpleAssessment in November and used a 10-point curve (as decided by committee when we used the same assessment on all teachers, librarians, and campus admins in October). Our students also did poorly–11% passing–but after research, we know why. We are now in the process of trying to revamp our MS Tech Apps offerings. I personally ran the 9th grade students through a lab over 3 days, so I got to see what was happening. They were not goofing around; they really did not know their stuff. Many of them were genuinely horrified by their scores. After the first class, I had to tell them that their names and scores would not be reported, that only summary data was being sent in. I did not want them to take such a big hit on their self-esteem when we had not taught them what they needed to know. It was a good wake-up call.


  6. Very interesting post. We used the SimpleAssessment for a group of middle school students a couple of weeks ago, and almost nobody passed. I walked away from the experience very confused. Going into the assessment, I felt that there were gaps in our students’ knowledge, but I’m not sure this tool did anything to really identify those gaps. The reporting was so thin and merely evaluative that there’s nothing there to really inform instruction.


  7. Learning.com is glad that these important questions have been raised about the assessment of tech literacy, and would like to help by answering the questions below as they relate to our TechLiteracy Assessment.

     1. “How have each of these instruments been checked for validity/reliability?”

     Third-party psychometric validation: TechLiteracy Assessment uses questions that were validated by established third-party psychometricians following national beta testing to ensure accurate, usable reporting data. The questions are a mix of performance-based items that use simulated software with realistic choices and often multiple correct answers, enabling students to authentically show they can complete a complex task, and multiple-choice, knowledge-based questions using text and often graphical examples.

     Appropriate reading levels: To ensure that we are testing technology literacy rather than English reading skills, the Elementary School version is written at a third grade reading level. The Middle School version of TechLiteracy Assessment is written at a sixth grade reading level. Each item assesses students’ skill level on durable concepts and strategies that extend beyond specific brands of software, requiring students to demonstrate adaptable, generalized technology skills.

     Proficiency standards: TechLiteracy Assessment was designed to measure student proficiency in technology skills and knowledge. To test this, it was first necessary to define proficiency at both the elementary and middle school levels. The proficiency benchmarks for students were created after conducting an exhaustive survey of state and national technology standards, then reviewed by experts in standards to see what students need to know to be successful. These standards were used to determine the nationally prevalent skill and knowledge expectations and requirements for elementary and for middle school students. When standards serve as educational goals, they often need to be revised into statements of achievement before they can be measured. This requires breaking standards down into component parts and linking them to specific actions. For example, a standard requiring students “to understand software menus” can best be assessed by asking the student to perform a task that requires use of software menus. TechLiteracy Assessment items were written to assess student ability in these standards. Items were then tested with students in field studies in different states and among different demographic populations. The prevalent standards, items, and student performance data were then scrutinized by a qualified national panel of technology instruction experts with classroom, district-level, and academic research experience. This panel, in conjunction with expert psychometricians, examined the data and made two determinations. First, they confirmed that TechLiteracy Assessment does effectively measure grade-appropriate student skills and knowledge in technology. Second, they determined where the bar for proficiency in technology literacy should lie for the elementary and for the middle school national student populations. This determined the Proficiency Standard used in TechLiteracy Assessment.

     Ongoing psychometric review: Each item and each test form (the pre-test and post-test for 5th grade, and the pre-test and post-test for 8th grade) is examined anew after every testing window and measured by a staff of highly experienced and qualified psychometricians to ensure that student answers and abilities are measured accurately against the stated benchmarks.

     Preservation of scoring validity: To secure the validity of the assessment, customers are not able to change which questions appear on the assessment or make changes to scoring. Student results can be directly compared across pre- and post-tests, year to year, as well as across classes, schools, districts, and nationally, using the same psychometrically valid scoring. The reports include national averages for comparison purposes.

     2. “Who at the commercial vendors (e.g. Learning.com) makes the decision about the weighting of certain test items over another? Have they been transparent about this?”

     On TechLiteracy Assessment, no items are weighted over others. The scale score indicates proficiency and is the only score that is comparable from one test to the next; point values per item are not. While the number of points is calculated without numerical weighting, each new assessment has questions of varying difficulty. The combination of items for each new test is analyzed by psychometricians using Item Response Theory to determine the test characteristic curve, which sets the new cutoff. The scaled passing score will always be 220; however, depending on the number of correct answers needed to obtain that score, the number of points each question is worth will vary from form to form. This means that if the test characteristic curve indicates that the combination of items has a higher level of difficulty than before, fewer items will need to be correct to show proficiency and more items will have their points averaged to fit within the remaining 80 points on the scale score. Or, if the analysis has determined that the items were less difficult than before, more items must be correct to achieve proficiency, and fewer items are left to be averaged into the remaining points. Points per item are typically different on either side of the cut mark and are determined by psychometric analysis of each new test form.

     3. “How transparent is the Texas Education Agency in sharing the directions provided to the commercial vendor chosen for their technology literacy assessment pilot?”

     I believe you will find answers to questions 3, 4, 5, and 8 in the TEA’s “Progress Report on the Long-Range Plan for Technology, 2006-2020,” which can be found at: http://www.learning.com/states/pdf/TEA-Progress-Report-Long-Range-Tech-Plan.pdf

     4. “Does TEA plan to release results similar to those reflected in my simple survey above?” (see answer 3)

     5. “What are the usage statistics for Technology Applications:TEKS electronic materials?” (see answer 3)

     6. “There are obvious benefits to having students in Texas being perceived to score low (e.g. ‘Our scores are awful, we need more funding.’) but the converse is also true. The reaction might be this: ‘TEA, you’ve funnelled funding to schools for quite some time…and these are the results you have to show for it?’ But HIGH scores–perhaps inflated, we don’t know–might also allow TEA to say, ‘See? We’ve invested in technology–for TA:TEKS Electronic Curriculum, Technology Immersion–for public schools and it’s starting to pay off in higher test scores.’ Which is truer or is the truth in another quadrant of reality?”

     TechLiteracy Assessment’s Proficiency Standards for fifth and for eighth grade were set by the standard-setting panel and enable assessment scores to be compared one to one across the nation and over time. Authentic assessment of software skills using simulations with multiple correct answers accurately demonstrates student ability where memory-based questions cannot. TechLiteracy Assessment is an age-appropriate, criterion-referenced assessment. The assessment is limited to 47 questions to prevent a potential fatigue factor from influencing student performance. Different pre- and post-tests enable meaningful reporting at year’s end. Psychometric validation ensures the accuracy of the assessment.

     7. “Have you published your weighting or grading scale for the assessments?”

     The scale for the assessments is published on customers’ reports along with comparative data that enables customers to compare the proficiency of their students with national results. TechLiteracy Assessment does not use weighting. The score begins at 100 to prevent confusion with percentage scores of 0 to 100%. It extends from 100 to 300 to prevent confusion resulting from trying to draw inaccurate relationships with other, unrelated assessments. Minimal proficiency is indicated when the student achieves 220 points.

     8. “How do your assessments match the Technology Applications:TEKS electronic materials? How about the revised ISTE National Education Technology Standards for Students?”

     TechLiteracy Assessment aligns to the Texas TEKS-TA and was created in collaboration with educators from Austin, TX. The assessment is also aligned to NETS 2007. In addition, Learning.com offers 21st Century Skills Assessment, which reports directly against the NETS-S 2007 standards.

     9. “Will you be publishing an overview of all Texas–and perhaps other states as well–school district scores (how many 8th graders assessed, percent passing, etc.)?” (see answer 3)

     Thank you for the opportunity to contribute to this dialogue.

     Michael Harris
     Product Manager for TechLiteracy, Learning.com
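
To make the scoring scheme described in answers 2 and 7 above concrete, here is a minimal Python sketch assuming simple linear interpolation on each side of an IRT-determined raw cut. The 47-item count comes from answer 6; the raw-cut values and the interpolation itself are assumptions for illustration, not Learning.com’s actual form-by-form conversion.

    # Illustrative sketch of the scale-score scheme described above: a 100-300 reporting
    # scale, a fixed passing score of 220, and a raw cut that moves from form to form as
    # IRT analysis dictates. The linear interpolation on each side of the cut is an
    # assumption for illustration; the vendor has not published the exact conversion.

    SCALE_MIN, SCALE_CUT, SCALE_MAX = 100, 220, 300
    N_ITEMS = 47  # number of questions per form, per the vendor's description

    def scale_score(raw_correct: int, raw_cut: int, n_items: int = N_ITEMS) -> float:
        """Map raw items correct onto the 100-300 scale (hypothetical linear mapping)."""
        if raw_correct <= raw_cut:
            # At or below the cut: 120 scale points (100-220) spread over raw_cut items.
            return SCALE_MIN + raw_correct * (SCALE_CUT - SCALE_MIN) / raw_cut
        # Above the cut: the remaining 80 scale points (220-300) spread over the rest.
        return SCALE_CUT + (raw_correct - raw_cut) * (SCALE_MAX - SCALE_CUT) / (n_items - raw_cut)

    if __name__ == "__main__":
        # Hypothetical raw cuts: a harder form (lower cut) vs. an easier form (higher cut).
        for raw_cut in (24, 30):
            correct = 28
            print(f"raw cut {raw_cut}: {correct}/{N_ITEMS} correct -> "
                  f"scale score {scale_score(correct, raw_cut):.0f} (pass >= {SCALE_CUT})")
        # The same 28 correct answers pass on the harder form but not on the easier one,
        # which is the behavior the answer to question 2 describes.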

