Automated scoring of written and spoken test responses is a growing field in educational natural language processing. Automated scoring engines employ machine learning models to predict scores for such responses based on features extracted from the text/audio of these responses. Examples of automated scoring engines include `Project Essay Grade <http://pegwriting.com/about>`_ for written responses and `SpeechRater <https://www.ets.org/research/policy_research_reports/publications/report/2008/hukv>`_ for spoken responses.
Rater Scoring Modeling Tool (RSMTool) is a Python package that automates and combines into a single pipeline multiple analyses commonly conducted when building and evaluating such scoring models. The output of RSMTool is a comprehensive, customizable HTML statistical report that contains the output of these analyses. While RSMTool makes it simple to run a set of standard analyses using a single command, it is also fully customizable, allowing users to easily exclude unneeded analyses, modify the default analyses, and even include custom analyses in the report.
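
As a sketch of what running this pipeline programmatically might look like, the snippet below assumes that RSMTool is installed and exposes a top-level ``run_experiment`` function (an assumption to verify against the API documentation); the configuration file and output directory names are hypothetical placeholders:

```python
from pathlib import Path

# Hypothetical placeholder paths for this sketch.
config_file = Path("my_experiment_config.json")
output_dir = Path("my_output_dir")

try:
    # Assumed API: rsmtool's top-level run_experiment runs all standard
    # analyses and writes the HTML report and intermediate outputs
    # under the given output directory.
    from rsmtool import run_experiment
    run_experiment(str(config_file), str(output_dir))
except ImportError:
    # RSMTool is not installed in this environment.
    print("rsmtool is not installed.")
```

If RSMTool is not installed, the snippet simply prints a message instead of failing.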
We expect the primary users of RSMTool to be researchers working on developing new automated scoring engines or on improving existing ones. Note that RSMTool is not a scoring engine by itself but rather a tool for building and evaluating machine learning models that may be used in such engines.
RSMTool is driven by a configuration file that users have to supply. Given the large number of available options, creating this file can get complicated, especially for new users. To make this easier, RSMTool can help users generate configuration files interactively via guided prompts. The video below demonstrates this feature.

To get started with RSMTool, please see the extensive official documentation.

``doc/automated_configuration.rst``

Auto-generating configuration files
-----------------------------------
Configuration files for :ref:`rsmtool <config_file_rsmtool>`, :ref:`rsmeval <config_file_rsmeval>`, :ref:`rsmcompare <config_file_rsmcompare>`, :ref:`rsmpredict <config_file_rsmpredict>`, :ref:`rsmsummarize <config_file_rsmsummarize>`, and :ref:`rsmxval <config_file_rsmxval>` can be difficult to create manually due to the large number of configuration options supported by these tools. To make this easier for users, all of these tools support *automatic* creation of configuration files, both interactively and non-interactively.
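
In the non-interactive case, a configuration file is ultimately just JSON, so it can also be assembled with a short script. The sketch below writes a minimal ``rsmtool`` configuration; the four required fields shown (``experiment_id``, ``model``, ``train_file``, ``test_file``) follow the rsmtool documentation, while the paths and experiment name are hypothetical placeholders:

```python
import json

# Minimal rsmtool configuration: the four required fields plus an
# optional description. Paths and names here are placeholders.
config = {
    "experiment_id": "example_experiment",
    "model": "LinearRegression",
    "train_file": "train.csv",
    "test_file": "test.csv",
    "description": "A minimal example configuration",
}

# Write the configuration to a JSON file that can be passed to rsmtool.
with open("example_rsmtool_config.json", "w") as f:
    json.dump(config, f, indent=4)
```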
Interactive generation
~~~~~~~~~~~~~~~~~~~~~~

For example, to generate an ``rsmtool`` configuration file interactively, run the ``rsmtool generate --interactive`` command.

The following screencast shows an example interactive session after the above command is run (click to play):
If you want to include subgroup information in the reports for ``rsmtool``, ``rsmeval``, ``rsmcompare``, and ``rsmxval``, you should add ``--subgroups`` to the command. For example, when you run ``rsmeval generate --interactive --subgroups`` you would be prompted to enter the subgroup column names and the ``general_sections`` list (if shown [#f1]_) will also include subgroup-based sections. Since the ``subgroups`` option can accept multiple inputs, it is handled in the same way as the ``experiment_dirs`` option for ``rsmsummarize`` above.
We end with a list of important things to note about interactive generation:

- Required fields will *not* accept a blank input (just pressing enter) and will show an error message in the bottom left until a valid input is provided.
- Optional fields will accept blank inputs since they have default values that will be used if no user input is provided. In some cases, default values are shown underlined in parentheses.
- You can also use ``-i`` as an alias for ``--interactive`` and ``-g`` as an alias for ``--subgroups``. So, for example, if you want to interactively generate a configuration file with subgroups for ``rsmtool``, just run ``rsmtool generate -ig`` instead of ``rsmtool generate --interactive --subgroups``.

``doc/getting_started.rst``

Installation
============
Note that RSMTool currently works with Python 3.8, 3.9, and 3.10.
Installing with conda
----------------------
Currently, the recommended way to install RSMTool is by using the ``conda`` package manager. If you have already installed ``conda``, you can skip straight to Step 2.
1. To install ``conda``, follow the instructions on `this page <https://conda.io/projects/conda/en/latest/user-guide/install/index.html>`_.
2. Create a new conda environment (say, ``rsmtool``) and install the RSMTool conda package for your preferred Python version. For example, for Python 3.8, run::
3. Activate this conda environment by running ``conda activate rsmtool``. You should now have all of the RSMTool command-line utilities in your path. [#]_
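
As a sketch of steps 2 and 3, assuming the RSMTool conda package is published to the ``ets`` channel (the channel used by the release process) and using Python 3.8; the environment name is just an example:

```shell
# Create a new environment with Python 3.8 and install RSMTool from
# the ets channel, pulling dependencies from conda-forge.
conda create -n rsmtool -c conda-forge -c ets python=3.8 rsmtool

# Activate the environment so the RSMTool command-line utilities
# (rsmtool, rsmeval, rsmcompare, etc.) are on your PATH.
conda activate rsmtool
```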
Automated scoring of written and spoken responses is a growing field in educational natural language processing. Automated scoring engines employ machine learning models to predict scores for such responses based on features extracted from the text/audio of these responses. Examples of automated scoring engines include `MI Write <https://measurementinc.com/miwrite>`_ for written responses and `SpeechRater <https://www.ets.org/research/policy_research_reports/publications/report/2008/hukv>`_ for spoken responses.
RSMTool is a Python package that automates and combines in a *single* :doc:`pipeline <pipeline>` multiple analyses that are commonly conducted when building and evaluating automated scoring models. The output of RSMTool is a comprehensive, customizable HTML statistical report that contains the outputs of these analyses. While RSMTool makes it simple to run this set of standard analyses using a single command, it is also fully customizable, allowing users to easily exclude unneeded analyses, modify the standard analyses, and even include custom analyses in the report.
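
A typical invocation of that single command, assuming the standard ``rsmtool <config_file> [output_dir]`` usage and hypothetical file names, might look like:

```shell
# Run the full set of standard analyses described by the configuration
# file and write the HTML report and intermediate files to output_dir.
# Both names here are hypothetical placeholders.
rsmtool my_experiment_config.json output_dir
```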
The primary means of using RSMTool is via the :doc:`command-line <usage_rsmtool>` utilities.

Documentation
=============
.. note::
   If you use the `Dash <https://kapeli.com/dash>`_ app on macOS, you can also download the complete RSMTool documentation for offline use. Go to the Dash preferences, click on "Downloads", then "User Contributed", and search for "RSMTool".

``doc/internal/release_process.rst``

This process is only meant for the project administrators, not users and developers.

#. Run the ``tests/update_files.py`` script with the appropriate arguments to make sure that all test data in the new release have correct experiment ids and filenames. If any (non-model) files need to be changed this should be investigated before the branch is released. Please see more details about running this `here <https://rsmtool.readthedocs.io/en/stable/contributing.html#writing-new-functional-tests>`__.
   .. note::

      Several files have been excluded from the repository due to their non-deterministic nature, so please do not add them back to the repository. The following files are currently excluded:

      * Fairness test files for `lr-eval-system-score-constant` test
      * Predictions and all evaluation files for `linearsvr` test.

      Note that the full set of outputs from these test files are also used as input for `rsmcompare` and `rsmsummarize` tests. These *input* files need to be updated following the process under **Example 2** in `Writing new functional tests <https://rsmtool.readthedocs.io/en/stable/contributing.html#writing-new-functional-tests>`_. You can also see `this pull request <https://github.com/EducationalTestingService/rsmtool/pull/525>`_ for more information.

#. Create a release branch ``release/XX`` on GitHub.
#. Build the PyPI source and wheel distributions using ``python setup.py sdist build`` and ``python setup.py bdist_wheel build`` respectively.
#. Upload the source and wheel distributions to TestPyPI using ``twine upload --repository testpypi dist/*``. You will need to have the ``twine`` package installed and set up your ``$HOME/.pypirc`` correctly. See details `here <https://packaging.python.org/guides/using-testpypi/>`__. You will need to have the appropriate permissions for the ``ets`` organization on TestPyPI.
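
Since this step depends on ``$HOME/.pypirc`` being set up, here is a sketch of a typical layout with both PyPI and TestPyPI configured; the token values are placeholders:

```ini
[distutils]
index-servers =
    pypi
    testpypi

[pypi]
username = __token__
password = <your-pypi-api-token>

[testpypi]
repository = https://test.pypi.org/legacy/
username = __token__
password = <your-testpypi-api-token>
```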
#. Then run some tests from an RSMTool working copy. If the TestPyPI package works, move on to the next step. If it doesn't, figure out why, then rebuild and re-upload the package.
#. Build the new conda package by running the following command in the ``conda-recipe`` directory (this assumes that you have cloned RSMTool in a directory named ``rsmtool``). You may need to comment out lines in your ``$HOME/.condarc`` file if you are using ETS Artifactory and you get conflicts::
   conda build -c conda-forge -c ets .
#. This will create a noarch package with the path to the package printed out to the screen.
#. Upload the package file to anaconda.org using ``anaconda upload --user ets <path_to_file>``. You will need to have the appropriate permissions for the ``ets`` organization.
#. Create pull requests on the `rsmtool-conda-tester <https://github.com/EducationalTestingService/rsmtool-conda-tester/>`_ and `rsmtool-pip-tester <https://github.com/EducationalTestingService/rsmtool-pip-tester/>`_ repositories to test the conda and TestPyPI packages on Linux and Windows.
#. Once the build for the PR passes and the reviewers approve, merge the release branch into ``main``.
#. Upload the already-built source and wheel packages to PyPI using ``twine upload dist/*``. You will need to have the ``twine`` package installed and set up your ``$HOME/.pypirc`` correctly. You will need to have the appropriate permissions for the ``ets`` organization on PyPI.
#. Make sure that the ReadTheDocs build for ``main`` passes by examining the badge at this `URL <https://img.shields.io/readthedocs/rsmtool/main.svg>`__ - this should say "passing" in green.