The terms, conditions and specifications provided in the following Schedules are incorporated into and made part of the Caveon Technology and Internet-Based Services Subscriber Agreement and Order Form (the “Agreement”) for each and every Caveon customer that has entered into an Agreement with Caveon where the Order Form provides that the customer is licensing the Technology identified in the applicable Schedule. The Schedules are subject to all other terms and conditions set forth in the Agreement and do not create any obligations for Caveon or convey any rights to any other party in Caveon’s Technology in the absence of a fully executed Agreement with Caveon that makes reference to them.
SCHEDULE A
SmartItem Patent
Title: Systems and Methods for Testing Skills Capability Using Technologically Enhanced Questions in a Computerized Environment
Application No.: 16/217,614
Filing Date: December 12, 2018
SCHEDULE B
DOMC Patent
Title: PRESENTING ANSWER OPTIONS TO MULTIPLE-CHOICE QUESTIONS DURING ADMINISTRATION OF A COMPUTERIZED TEST
Patent Number: US 7,513,775
Issue Date: April 7, 2009
SCHEDULE C
SMARTITEM SYSTEM AND DATA REQUIREMENTS, AND QUALITY ASSURANCE AND CONTROL STANDARDS
Note: This document treats these terms as synonymous: skill, ability, proficiency, competency, standard (as in Common Core Standards), learning objective, performance objective, and others that are similar.
Caveon SmartItems™ can take many forms and item types, but all SmartItems™ share the following characteristics:
SmartItems™ are designed to measure all of a competency, objective, skill or ability, rather than a part or slice, from the smallest of skills up to and including an entire performance domain.
For example, if an objective is “the student can calculate two-digit multiplication correctly,” a single SmartItem™ can be designed to cover the entire universe of possible two-digit multiplication combinations. A SmartItem™ can also cover a very large content domain, say, the entire content of a college course.
This contrasts with traditional items, where multiple individual items (5×1, 4×5, …) would comprise a bank of items to be individually selected for administration to a candidate.
As a non-mathematical example, a competency domain could be stated as “Describe how the U.S. constitutional amendments protect civil liberties,” and again, a single SmartItem™ would be designed to cover the entire population of ways that the amendments protect civil liberties.
SmartItems™ are designed to cover an identified competency or skill with as many item variations as possible. It is not unusual for a single SmartItem™ to be capable of producing tens of thousands, hundreds of thousands, or millions of item variations.
The quality of the SmartItem™ is dependent to a great degree on the quality of the skill description. Using SmartItems™ will require a testing program to become more precise in the authoring of the descriptions.
SmartItems™ are designed to present different item “variations” randomly to each test taker, where an item variation is a specific set of variable elements. Variable elements can be created by code, or by unique combinations of response options (a minimal sketch of a code-generated SmartItem™ follows this list of characteristics).
SmartItems™ are designed with the goal of minimizing the number of total items a program has to manage (for example, single SmartItems™ can represent entire competencies) in an item bank, as opposed to expanding an item bank by creating and storing individual item variations as actual discrete static items.
SmartItems™ are evaluated psychometrically at the “SmartItem™” level as opposed to the item variation level, where there may not be enough information to evaluate each variation’s psychometric properties.
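To make the characteristics above concrete, the following is a minimal, hypothetical sketch (not Caveon’s implementation) of a code-driven SmartItem™ for the two-digit multiplication objective used as an example above; the function and field names are illustrative assumptions.

    import random

    def two_digit_multiplication_variation(rng: random.Random) -> dict:
        """Return one variation of a single SmartItem covering the objective
        'the student can calculate two-digit multiplication correctly'.
        Each call draws a fresh factor pair, so one item definition spans the
        whole domain of two-digit products (90 x 90 = 8,100 factor pairs)."""
        a, b = rng.randint(10, 99), rng.randint(10, 99)
        correct = a * b
        # Plausible incorrect options built from common procedural errors.
        distractors = {a + b,                          # added instead of multiplied
                       a * b + 10,                     # carrying error
                       a * (b % 10) + a * (b // 10)}   # place-value error
        distractors.discard(correct)
        return {
            "stem": f"What is {a} x {b}?",
            "correct_options": [correct],
            "incorrect_options": sorted(distractors),
        }

    # Each test taker receives a randomly generated variation of the same item.
    print(two_digit_multiplication_variation(random.Random()))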
User-Facing SmartItem™ Item Required Properties:
Item content is presented as appropriate for the item type selected
If a program has “mark and review” functionality enabled, or if an item is abandoned without being completed, it may be revisited to the extent that program specifications and item type selection allow.
Once an item has been completed, it may or may not be revisited, depending on the program specifications.
SmartItem™ Writing Required Properties:
Stems can be varied using code, depending on the type and style of SmartItem™ being developed.
For selected response items, there is no limit to the number of correct options that can be produced.
For selected response items, there is no limit to the number of incorrect options that can be produced.
For selected response items, the item writer must configure, or approve the defaults of, the presentation set (actual number of correct/incorrect options to potentially display).
For selected response items, the item writer must configure, or approve the defaults of, the scoring rules for the item (see the configuration sketch after this list).
For constructed response items (e.g., essay, short-answer), the item writer must approve the scoring rules for each variation of the item.
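As an illustration of the presentation set and scoring-rule configuration described above, the sketch below shows one hypothetical way a selected-response SmartItem™ configuration could be represented; the class, function, and parameter names are assumptions, not Caveon’s API.

    import random
    from dataclasses import dataclass

    @dataclass
    class SelectedResponseConfig:
        """Item-writer settings for a selected-response SmartItem variation."""
        num_correct_to_display: int = 1     # presentation set: correct options shown
        num_incorrect_to_display: int = 3   # presentation set: incorrect options shown
        num_correct_required: int = 1       # scoring rule: correct selections needed for a score of 1

    def build_presentation_set(correct_pool, incorrect_pool, cfg, rng=random):
        """Draw one test taker's presentation set from the (possibly very large) option pools."""
        return (rng.sample(correct_pool, cfg.num_correct_to_display),
                rng.sample(incorrect_pool, cfg.num_incorrect_to_display))

    # Example: display 1 correct and 3 incorrect options; 1 correct selection scores the item.
    options_shown = build_presentation_set(["correct option 1", "correct option 2"],
                                           ["incorrect option 1", "incorrect option 2",
                                            "incorrect option 3", "incorrect option 4"],
                                           SelectedResponseConfig())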
SmartItem™ Data Storage Required Properties:
*Note: Data storage requirements will vary by item type. In general, all possible data, including individually identifying item characteristics, should be stored for each item type. This list is not exhaustive; an illustrative record layout follows the list.
Test taker ID
Time/Day of test
Start/Stop times for the test
ItemID from pool/bank/test
Stem Presented to examinee
Option(s) presented to examinee
OptionID for option presented
Ordinal Position of each option presented
Option Response, including response changes
Option Key
Option Score
Item Score
Latency/Read/Response time, at the lowest possible level of data collection for the specific item type
Latency for the test
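The dataclass below is a hypothetical record layout illustrating the storage fields listed above; the field names and types are assumptions and would be adapted to the item type.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class SmartItemResponseRecord:
        """One stored response to a SmartItem variation (illustrative only)."""
        test_taker_id: str
        test_datetime: datetime        # time/day of test
        test_start: datetime           # start time for the test
        test_stop: datetime            # stop time for the test
        item_id: str                   # ItemID from pool/bank/test
        stem_presented: str            # exact stem shown to this examinee
        option_ids: list               # OptionIDs presented, in ordinal position order
        responses: list                # responses, including response changes
        option_keys: list              # key for each option presented
        option_scores: list
        item_score: int
        item_latency_seconds: float    # lowest level of latency available for the item type
        test_latency_seconds: float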
Item Presentation Specifications:
SmartItem™ selected by driver (SmartItems™ can be used with random/fixed linear, adaptive, LOFT or any other item selection/presentation method or test design).
SCHEDULE D
DOMC SYSTEM AND DATA REQUIREMENTS, AND QUALITY ASSURANCE AND CONTROL STANDARDS
User Facing DOMC Item Required Properties:
Stem is presented first.
Options are presented one at a time after a user indicates that he or she is ready to view options.
Users answer Yes or No to each option as it is presented until the question is scored based on designated rules.
Once an option has been answered, it may not be revisited.
If a program has “mark and review” functionality enabled, or if a question is abandoned without being completed, it may be revisited, but only to the extent that previously answered options are “locked” and may not be revisited.
Once an item has been completed, it may not be revisited.
DOMC Item Writing Required Properties:
One stem is written, in any format and length, likely ending in a form similar to “Is this an appropriate response/answer?”
The default item writing template will present four “slots” in which to write correct/incorrect options.
There is no limit to the number of correct options written.
There is no limit to the number of incorrect options written.
The item writer must configure, or approve the defaults of, the presentation set (actual number of correct/incorrect options to potentially display).
The item writer must configure, or approve the defaults of, the scoring rules for the item.
The item writer can change the probability of presenting an extra, unscored option to break up candidate feedback, or accept the default probability of .5.
Default values for the Presentation Set and Scoring Rules are set by the system in advance. (A note about default values: this section is currently under review and development and is subject to change. It reflects the current implementation of Caveon’s DOMC systems. Caveon is evaluating best practices for recommended defaults based on item composition and will update this specification guidance based on further implementation.) The default presentation set and scoring rules are as follows (an illustrative representation of these defaults follows this list):
Default presentation set is the option pool (present all correct and all incorrect)
Default scoring rules to get an item correct require:
a correct endorsement (Yes) of 1 correct option
there is no default setting to get a question correct based on correct endorsement of incorrect options
Default scoring rules to get an item incorrect require:
an incorrect endorsement (No) of 1 correct option
an incorrect endorsement (Yes) of 1 incorrect option
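The defaults above can be summarized in a simple configuration structure. This is an illustrative representation with hypothetical key names, not Caveon’s actual configuration format.

    # Hypothetical representation of the default DOMC configuration described above.
    DEFAULT_DOMC_CONFIG = {
        # Presentation set: present the entire option pool.
        "num_correct_to_present": None,       # None = all correct options in the pool
        "num_incorrect_to_present": None,     # None = all incorrect options in the pool
        # Scoring rules.
        "yes_to_correct_for_score_1": 1,      # Yes to 1 correct option -> item correct
        "no_to_incorrect_for_score_1": None,  # no default rule based on incorrect options
        "no_to_correct_for_score_0": 1,       # No to 1 correct option -> item incorrect
        "yes_to_incorrect_for_score_0": 1,    # Yes to 1 incorrect option -> item incorrect
        # Random additional option (RAO) probability, per the writing properties above.
        "rao_probability": 0.5,
    }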
DOMC Data Storage Required Properties:
Test taker ID
Time/Day of test
Start/Stop times for the test
ItemID from pool/bank/test
OptionID for option presented
Ordinal Position of each option presented
Each option response endorsement (yes/no)
Option Key
Option Score
Item Score
Latency/Read time for the stem prior to “show me an option” button click
Latency/Response time for each option
Whether option is a random additional option (RAO – Unscored)
Latency for the test
(in addition to typical data stored for multiple choice questions; an illustrative record layout follows this list)
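As with the SmartItem™ record above, the sketch below is a hypothetical per-option record layout for the DOMC fields listed; names are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class DOMCOptionRecord:
        """One stored row per option presented within a DOMC item (illustrative only)."""
        test_taker_id: str
        item_id: str
        option_id: str
        ordinal_position: int           # order in which the option was shown
        endorsement: bool               # True = Yes, False = No
        option_key: bool                # True if the option is a correct option
        option_score: int
        is_rao: bool                    # random additional option (RAO, unscored)
        stem_latency_seconds: float     # read time before the "show me an option" click (item level)
        option_latency_seconds: float   # response time for this option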
DOMC Presentation Development Required Properties:
Individual items/options adhere to unique scoring/stopping rules per item/option
Presentation set, specified in general by the item writer, created when item is called for by test driver
DOMC Test Administration Required Properties:
Disable Mark and Review capability when DOMC items are on a test.
If this functionality is absolutely required by a client, each DOMC item must be “bookmarked,” meaning that the candidate will return to a DOMC item in the state in which it was left. No option previously seen can be revisited.
Item Presentation Specifications:
1. DOMC item selected by the driver (selection can be random, fixed linear, adaptive, or any other item selection method).
2. Presentation Set created. This step occurs each time a DOMC item is selected, providing a potentially different presentation set for each test taker.
   Based on the presentation set specifications, randomly select the presentation set of correct and incorrect options from the option pools.
   Logically, the number of correct and incorrect options defined by the presentation set must be equal to or fewer than the number of correct and incorrect options in the option pools.
3. Stem presented.
   The candidate must click the “show me an option” button to get the first option.
4. A randomly presented option is answered with the Yes and No buttons.
   The answer is Endorsed (clicking the Yes button) or Not Endorsed (clicking the No button).
5. The Endorsed/Not Endorsed answer is compared with the scoring rules. If the rules are satisfied, score the item and proceed to step 6.
   If the scoring rules are not yet satisfied, return to step 4 and continue presenting options and collecting responses until the scoring rules are satisfied; then score the item and proceed to step 6.
6. Random additional option (RAO) presented based on the RAO probability.
   Using the probability function, calculate whether an RAO is to be presented.
   If no random option is to be presented – END ITEM and move to the next item.
   If a random option is to be presented, evaluate whether there are enough options remaining in the presentation set to present an additional option.
   If no – END ITEM and move to the next item.
   If yes – proceed to step 7.
7. RAO presented and answered.
   Record the RAO data without regard for scoring; the item score has already been recorded in step 5. END ITEM and move to the next item.
(A minimal sketch of this presentation loop appears after Figure 1.)
Figure 1. Presentation Specification Diagram
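The loop below is a minimal sketch of steps 2 through 7 above; it is illustrative Python, not Caveon’s delivery engine. The name administer_domc_item, the callable ask(stem, option) standing in for the candidate’s Yes/No click, and the callable scoring_rules(responses) returning 1, 0, or None when no rule is satisfied yet are all assumptions (see the scoring sketch after Figure 2).

    import random

    def administer_domc_item(stem, presentation_set, scoring_rules, rao_probability, ask,
                             rng=random):
        """Present a DOMC item per steps 2-7. `presentation_set` is a list of
        (option_text, is_correct) pairs already drawn from the option pools."""
        remaining = rng.sample(presentation_set, len(presentation_set))  # random order
        responses, score = [], None
        while remaining and score is None:
            option_text, is_correct = remaining.pop(0)        # step 4: present one option
            endorsed = ask(stem, option_text)                 # Yes = True, No = False
            responses.append((option_text, is_correct, endorsed))
            score = scoring_rules(responses)                  # step 5: apply scoring rules
        # Step 6: possibly present a random additional option (RAO), which is not scored.
        if remaining and rng.random() < rao_probability:
            option_text, is_correct = remaining.pop(0)
            responses.append((option_text, is_correct, ask(stem, option_text)))  # step 7
        return score, responses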
Item Scoring Specifications:
An item is completed when enough information has been supplied to score the item and, where applicable, an RAO has been presented; the test then moves to the next item if the test has not been completed. (A sketch of the scoring check appears after Figure 2.)
A random additional option (RAO) may be presented at a designated probability. It is unscored, and its purpose is to break up feedback from responding to previous options.
There are 4 scoring outcomes:
The candidate has answered enough correct options correctly (with 1 or more Yes responses) pursuant to the scoring rules; Score = 1.
The candidate has answered enough incorrect options correctly (with 1 or more No responses) pursuant to the scoring rules; Score = 1.
The examinee has answered 1 or more incorrect options with a Yes response pursuant to the designated scoring rules for the item; Score = 0.
The examinee has answered 1 or more correct options with a No response pursuant to the designated scoring rules for the item; Score = 0.
Scoring rules need to be applied after the first option has been presented and answered, and then after each subsequent option presentation and response.
Logically, the scoring requirements for one of the 4 possible outcomes must have been satisfied before or upon the answer of the last possible option in the presentation set.
Figure 2. Item Scoring Specification Diagram
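A compact sketch of the four scoring outcomes is shown below. The thresholds default to the Schedule D default rules, responses is a list of (option_text, is_correct, endorsed) triples in presentation order, and the function name and signature are illustrative assumptions.

    def score_domc_responses(responses,
                             yes_to_correct_needed=1, no_to_incorrect_needed=None,
                             no_to_correct_allowed=1, yes_to_incorrect_allowed=1):
        """Return 1, 0, or None (no scoring rule satisfied yet)."""
        yes_correct = sum(1 for _, c, e in responses if c and e)
        no_correct = sum(1 for _, c, e in responses if c and not e)
        yes_incorrect = sum(1 for _, c, e in responses if not c and e)
        no_incorrect = sum(1 for _, c, e in responses if not c and not e)
        # Outcomes 3 and 4: a disqualifying response scores the item 0.
        if no_correct >= no_to_correct_allowed or yes_incorrect >= yes_to_incorrect_allowed:
            return 0
        # Outcome 1: enough correct options endorsed Yes.
        if yes_correct >= yes_to_correct_needed:
            return 1
        # Outcome 2: enough incorrect options answered No (not used under the defaults).
        if no_to_incorrect_needed is not None and no_incorrect >= no_to_incorrect_needed:
            return 1
        return None

Consistent with the specification above, this check would be applied after the first response and again after each subsequent option presentation and response.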
Item Development Specifications:
Writers must have the ability to produce as many correct and incorrect options as needed – system should not restrict the number of each type of option.
Writers must have the ability and have the responsibility to set scoring rules and the makeup of the Presentation Set.
# of correct answers to display; default all
# of incorrect options to display; default all
# of correct options answered with YES; Score = 1; default 1
# of incorrect options answered with NO; Score = 1; default –*
# of correct options answered with a NO; Score = 0; default 1
# of incorrect options answered with a YES; Score = 0; default 1
RAO probability setting; range from 0 to 1; default 0.5.
* Indicates that no default is set and this rule is not used for scoring; candidates must fulfill the correct answer option criteria to earn a score.
Make-up of the Option Pool:
Q: How many options, both Correct and Incorrect should be in the option pool?
A: You should create as many as are needed to answer the question in a plausible, practical, and realistic way. Some incorrect options, especially those with minimal plausibility, are the most difficult to author, and these should be avoided. Author options that make sense whether they appear as Correct or Incorrect; such options should be easy for a subject matter expert to write.
Limiting exposure of options is a reason to have a large number of options: the more options, correct or incorrect, the better the security effect, as each option is exposed less frequently.
Remember, too, that options can be reworded versions of each other: “The 1st president of the United States” may be equally correct (or incorrect) as “George Washington.” If these were both correct options, the Presentation Set would likely be configured to randomly select only one of them.
Make-up of the Presentation Set:
Q: How does an item writer decide on how many correct and incorrect options to include in the Presentation Set?
A: Research has shown that test takers see and respond to an average of 2.5 options, even with the RAO used at a .5 probability. Most of the time it does not make sense to include more than 5 options in the Presentation Set, as most items end by the 4th or 5th option, and often before. So, as a rule of thumb, put no more than 4 or 5 options in the Presentation Set, in a reasonable combination of Incorrect and Correct options. (More on that below.) It is okay to have as few as 3, especially if the number of options in the option pool is small.
Q: How many correct and incorrect options should be included in the Presentation Set?
A: These numbers should be more or less proportional to the number of correct and incorrect options in the Option Pool. For example, if an Option Pool has 10 options, 7 of which are incorrect, then a writer could reasonably decide on 4 options in the Presentation Set, 3 of which are Incorrect and 1 of which is Correct. If the Option Pool had 8 correct and 2 incorrect options, then the opposite configuration for the Presentation Set could be selected. For items that are fairly balanced in terms of correct and incorrect options in the Option Pool, the Presentation Set could have approximately equal numbers of correct and incorrect options. (A simple sketch of this rule of thumb follows.)
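One simple way to express this rule of thumb, assuming a 4-option Presentation Set, is sketched below; the rounding rule is an illustrative choice, not a Caveon-specified formula.

    def proportional_presentation_counts(pool_correct, pool_incorrect, set_size=4):
        """Split a Presentation Set of `set_size` options roughly in proportion to the
        Option Pool, keeping at least one correct and one incorrect option."""
        total = pool_correct + pool_incorrect
        n_correct = round(set_size * pool_correct / total)
        n_correct = max(1, min(set_size - 1, n_correct))
        return n_correct, set_size - n_correct

    # The 10-option pool described above (3 correct, 7 incorrect):
    print(proportional_presentation_counts(3, 7))    # -> (1, 3): 1 correct, 3 incorrect
    # The opposite pool (8 correct, 2 incorrect):
    print(proportional_presentation_counts(8, 2))    # -> (3, 1): 3 correct, 1 incorrect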
Figure 4: Item Scoring Settings and Presentation Set
For this configuration set, the DOMC algorithm will select 2 correct answers to present and will potentially present all possible incorrect options (until the scoring rules are satisfied).
To earn a score of 1 on this item, a candidate must respond “yes” to both correct response options presented.
To earn a score of 0 on this item, a candidate must respond “no” to one correct response option, OR “yes” to one incorrect response option.
There is a 50% chance that an additional random option will be presented after the item is scored. (This configuration is walked through in the sketch below.)
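Expressed with the illustrative score_domc_responses sketch from the Item Scoring Specifications above, the Figure 4 configuration (2 correct options presented, all incorrect options, both presented correct options required for a score of 1, RAO probability 0.5) could be exercised as follows; the response sequence shown is hypothetical.

    # Both presented correct options endorsed Yes; one incorrect option answered No.
    responses = [("Option A", True, True),     # correct option, endorsed Yes
                 ("Option B", False, False),   # incorrect option, answered No
                 ("Option C", True, True)]     # second correct option, endorsed Yes
    print(score_domc_responses(responses, yes_to_correct_needed=2))   # -> 1

    # Answering No to either correct option, or Yes to any incorrect option, scores 0.
    print(score_domc_responses([("Option A", True, False)],
                               yes_to_correct_needed=2))              # -> 0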