Early in the new system's development, two points became painfully obvious.
I covered my experience with automated testing in a separate post. In this post, I discuss one of the more interesting problems I needed to resolve with the new system. For starters, I recognized that I would need to create some dummy data within the new system to fully test its functionality, but I also knew that once the new system was deployed, the information in the old system would need to be migrated over to it.
By beginning work on the facilities responsible for this migration, I could also produce the dummy data needed to experiment and work with the new system. Similar to my initial survey of all the functional requirements of the old system to determine the functionality of the new one, in this case I would need to review all of the data in the old system's database and determine how to transfer and restructure it to fit the database structure of the new system.
The steps would also need to be easily repeatable so that I could wipe the data in the new system after the testing and development phase and re-run the migration shortly before the initial deployment of the new system.
This process began with collecting a list of the tables in the old system's database and dismissing any tables that were not actively used in the old system in any capacity (oftentimes these tables were the result of one-off spreadsheet imports and had no further use shortly after their import).
Next, I broke the migration into stages based on the family of modules that each stage's migrated data would impact, and ordered the stages so that each one depended only on data migrated in the previous stages. This order and other details were collected in a text manual to aid the repeatability of the migration process.
This step didn't involve migrating existing data from the old system's database; rather, it involved building up the organizational structure of the institution in the new system. This created data would be referenced and used throughout the rest of the migration, which is why this step needed to come first.
This script was responsible for the following:
Most of this information was hard-coded in the source of the old system.
This was the first step in which information was actually migrated from the old system's database to the new system's database. It was further broken down into three scripts responsible for migrating the school information, institute information, and student test information respectively.
Migrating the school information involved migrating the data from two tables:
Into four tables in the new system’s database:
The query responsible for pulling the school data needed to bring consistency to absentee values, which were represented in different ways (actual empty strings, 0's, or null values), and to pass phone numbers through a function responsible for formatting them correctly.
Sanitization, consistency, formatting, and white-listing were concepts revisited several times in this migration process.
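The kind of normalization described above can be sketched in a couple of small helpers. This is a minimal illustration, not the migration's actual code: the function names, the assumption that "absent" meant `None`/`''`/`'0'`, and the 10-digit US phone format are all my own for the example.

```python
import re

def normalize_absent(value):
    """Collapse the inconsistent 'absent' representations (None, '', '0')
    into a single canonical value: None."""
    if value is None:
        return None
    text = str(value).strip()
    if text in ("", "0"):
        return None
    return text

def format_phone(raw):
    """Strip non-digit characters and format a 10-digit number as
    (XXX) XXX-XXXX; anything unrecognizable is treated as absent."""
    if raw is None:
        return None
    digits = re.sub(r"\D", "", str(raw))
    if len(digits) == 10:
        return f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
    return None  # unrecognized formats are treated as absent
```

With helpers like these, every row pulled by the migration query passes through one canonical path, so the new system never sees three different spellings of "no value."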
Migrating the institute information was a much simpler process and only involved one table in the old system:
This information was migrated to two tables in the new system:
Migrating the student tests' structures was a bit more difficult. The structures for the ACT, SAT, PRAXIS, and other tests whose information was stored directly on the student's record in the old system's database had to be created ad hoc first.
The remaining student tests all came from one table:
But there were problems with the above data:
This data was migrated over to the new system into two tables:
When the scripts were executed, the school, institute, and student test information of the old system was effectively cleaned and restructured in the new system.
This step would be responsible for migrating over the bulk of student information contained within six tables:
Four tables for storing the student’s demographic and general information.
This information would be migrated over to eight tables:
There were quite a few obstacles that needed to be overcome to promote a seamless migration of clean and accurate information:
There were many cases where a field's data needed to match the options available in the new system.
Many of the above obstacles would be encountered in future data migration of other user entity information.
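Matching a field's messy free-text values against the options the new system actually allows amounts to white-listing. The sketch below shows the general shape of that check; the option lists and variant spellings are invented for illustration and are not the real fields from the migration.

```python
# Hypothetical mapping from a new-system option to the old-system
# variants observed for it (invented for this example).
STATUS_OPTIONS = {
    "Full-Time": {"full-time", "full time", "ft", "fulltime"},
    "Part-Time": {"part-time", "part time", "pt", "parttime"},
}

def whitelist(value, options=STATUS_OPTIONS):
    """Map a messy old-system value onto one of the new system's options,
    or return None when nothing matches (flagging it for manual review)."""
    if value is None:
        return None
    needle = value.strip().lower()
    for canonical, variants in options.items():
        if needle == canonical.lower() or needle in variants:
            return canonical
    return None
```

The important design point is that anything outside the whitelist comes back as `None` rather than being passed through, so bad values surface during testing instead of quietly landing in the new database.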
Once the script responsible for executing this stage in the migration was completed, the new system would have all non-redundant student information in a clean and consistent manner.
The process was very similar to the above but concerned only faculty information.
This faculty information would come from four different tables:
This information would be migrated to four different tables:
Many of the obstacles above were present here as well. But additionally:
This step would be responsible for migrating data from two tables:
This data would be migrated over to twelve tables:
This process was handled similarly to the previous user entity migrations, with the additional detail of the clinical instructor's site information. However, clinical instructors had licenses, certifications, and organizations that needed to be created ad hoc in their own tables first before being added to the clinical instructor user's information.
This was straightforward for licenses and certifications; however, a clinical instructor could belong to multiple organizations, which were represented as a string field of the form: "PBS, CBS, ABC, FOX".
Thus, after every possible organization was created, the clinical instructor's organization field needed to be checked for matches between the different organizations' abbreviations and the contents of the field itself before the relationships between organizations and the clinical instructor user could be created in the new system.
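The abbreviation-matching step above can be sketched as follows. The organization names come from the example string in the text; the id values and the `organizations` lookup are invented stand-ins for the records created earlier in this stage.

```python
# Abbreviation -> organization id, as created earlier in this stage
# (ids invented for the example).
organizations = {"PBS": 1, "CBS": 2, "ABC": 3, "FOX": 4}

def organization_ids(field):
    """Split the old system's 'PBS, CBS, ABC, FOX'-style field and return
    the ids of every organization whose abbreviation appears in it."""
    if not field:
        return []
    entries = {part.strip().upper() for part in field.split(",")}
    return [org_id for abbr, org_id in organizations.items() if abbr in entries]
```

Each returned id then becomes one row in the join table linking organizations to the clinical instructor user.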
Once again, after this script was executed, non-redundant, consistent, formatted, and sanitized clinical instructor information populated the new system.
The last and definitely least of the migration steps ensured that all principals had their own corresponding user information in the system. Since they were not students, faculty, or clinical instructors, this process involved simply migrating any principal information from the old system's school information table into the base user demographic table in the new system to create a new user entity (unless the principal was also a student or another user type, in which case his/her information had already been migrated).
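The "create only if not already migrated" check for principals can be sketched like this. The in-memory `existing_users` store and the choice of email as the matching key are assumptions for the example; the real migration worked against the new system's database.

```python
# email -> user record, populated by the earlier student/faculty/
# clinical instructor migration steps (simplified to a dict here).
existing_users = {}

def migrate_principal(name, email):
    """Create a base user record for a principal unless a user with the
    same email was already migrated under another role."""
    key = email.strip().lower()
    if key in existing_users:
        return existing_users[key]  # already migrated; reuse the record
    user = {"name": name, "email": key, "roles": ["principal"]}
    existing_users[key] = user
    return user
```

This keeps the principal step idempotent as well: running it twice for the same person returns the same record instead of creating a duplicate.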
After the completion of the previous six steps, all organization and content-related information had been cleaned and restructured into the new system's database. These were the first 6 of the 17 steps needed to complete the entire data creation and migration process.
This post is about the project, DREAM
An online electronic assessment system for the purpose of collecting assessment data regarding student teachers and counselors throughout their collegiate career. The system also electronically facilitates a large number of other institutional processes in an effective manner.