A simple Wordle suggestion tool (and clone) based on the British National Corpus. Each suggestion is given two scores: a word score (ws) based on commonness, and a letter score (ls) calculated by first building a frequency table of the letters still available at each position and then summing the individual letter weights for each possible word. Suggestions are ranked by letter score, so the top guess is likely to reveal useful letters, if not the solution itself.
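The positional letter scoring described above can be sketched roughly as follows. This is a minimal illustration, not the tool's actual code; the function name and the assumption of five-letter candidates are mine.

```python
from collections import Counter

def rank_by_letter_score(candidates):
    """Rank candidate words by summed positional letter frequency.

    A frequency table is built for each of the five positions from the
    remaining candidate pool; a word's letter score is the sum of its
    letters' weights at their positions. (Illustrative sketch only.)
    """
    tables = [Counter(word[i] for word in candidates) for i in range(5)]

    def letter_score(word):
        return sum(tables[i][ch] for i, ch in enumerate(word))

    return sorted(candidates, key=letter_score, reverse=True)

# Example: "crate" shares common letters with both other words,
# so it scores highest and is suggested first.
ranked = rank_by_letter_score(["crane", "slate", "crate"])
```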
To use it, first make a guess on Wordle, then enter the same guess on \vat\vrd. For each matching letter on Wordle, click the letter once for yellow or twice for green. Finally, click "check it" for a list of suggestions.
Note: Using the BNC instead of a Wordle-specific dictionary means not all suggestions will be valid words and not all solutions can be suggested. This is an intentional weakness; otherwise it would feel too much like cheating.

A platform for collecting and analyzing Twitter (not X) discourse. This was my dissertation project, designed to simplify research into language use on Twitter. Twig enabled researchers to collect tweets into datasets based on a variety of parameters: keywords or phrases, posting date and time ranges, or full extraction of specific accounts.
The analysis environment allowed datasets to be explored and analyzed based on a combination of content and metadata filters. For example, one could identify the most frequent word following a search term on a specific date — and then read through every matching tweet to see how the word is being used.
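The "most frequent following word" query mentioned above can be sketched in a few lines. This is an illustrative stand-in, not Twig's implementation; the function name, the whitespace tokenization, and the example inputs are my assumptions.

```python
from collections import Counter

def next_word_counts(tweet_texts, term):
    """Count the words that immediately follow `term` in each tweet.

    Assumes the texts have already been filtered by date or other
    metadata; tokenization here is naive whitespace splitting.
    """
    counts = Counter()
    for text in tweet_texts:
        tokens = text.lower().split()
        for i, tok in enumerate(tokens[:-1]):
            if tok == term:
                counts[tokens[i + 1]] += 1
    return counts

# Example with made-up tweets filtered to one date:
texts = ["climate change is real", "climate change denial", "climate policy"]
top = next_word_counts(texts, "climate").most_common(1)
```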
Twig is no longer operational after the API changes made when Twitter was rebranded as X. What's linked here is an overview of what Twig briefly was.
A platform for genre-based writing instruction. I have been working on Dissemity since 2017, first as a PhD student and then as a postdoc. The home page offers a good overview of what Dissemity does, so I won't repeat it here. I am the technical lead and sole programmer, but Dissemity is ultimately the vision of OSU's Dr. Steph Link and has benefitted enormously from the work of a team of undergraduate and graduate research assistants.
Link, S., Redmon, R., & Hagan, M. (2025). Genre-based fine-tuning of large language models with self-organizing maps for automated writing evaluation. Research Methods in Applied Linguistics. DOI: https://doi.org/10.1016/j.rmal.2025.100219

Link, S., Redmon, R., Shamsi, Y., & Hagan, M. (2024). Generating genre-based automatic feedback on English for research publication purposes. CALICO Journal. DOI: https://doi.org/10.1558/cj.26273