Position Title: ETL Developer
Job Description: Perform business analysis and requirements gathering. Prepare technical specifications for the development of DataStage mappings that load data into various tables in data marts. Follow the change management process when fixing bugs in or enhancing existing code. Design and develop complex mappings to extract, transform, and load (ETL) data from relational databases, flat files, Excel, etc. into staging, data warehouse, and data mart tables.
Extract data from various sources and load it into targets such as flat files, Oracle, and Teradata; transform and load the data into the target database using IBM InfoSphere DataStage.
Validate data using SQL tools such as Toad and Teradata SQL Assistant.
Use Aggregator, Join, Lookup, Copy, and Transformer stages, Sequencer job activities, etc. to build complex ETL jobs.
Arrange tasks in order to satisfy business criteria using Execute Command, Nested Condition, and Sequencer activities, etc. Test mapping logic during unit and integration testing; validate results with the business analyst and/or customer. Use Unix shell scripts to automate renaming flat files with a timestamp extension, compressing and archiving files, creating workflows, manipulating files, notifying data owners via automatic success/failure emails, and uploading/downloading files to the desired locations.
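The file-handling automation described above might be sketched as a small Unix shell script. The directory paths, the `.dat` sample file, and the notification address below are illustrative assumptions, not details from the posting:

```shell
#!/bin/sh
# Sketch of the described automation: rename flat files with a timestamp
# extension, compress, archive, and notify the data owner on success/failure.
# SRC_DIR, ARCHIVE_DIR, NOTIFY, and the sample file are assumptions.

SRC_DIR=${SRC_DIR:-/tmp/etl_landing}
ARCHIVE_DIR=${ARCHIVE_DIR:-/tmp/etl_archive}
NOTIFY=${NOTIFY:-dataowner@example.com}
TS=$(date +%Y%m%d_%H%M%S)

mkdir -p "$SRC_DIR" "$ARCHIVE_DIR"
echo "id,amount" > "$SRC_DIR/sales.dat"   # sample input so the sketch is self-contained

status=0
for f in "$SRC_DIR"/*.dat; do
  [ -e "$f" ] || continue                 # nothing to process
  base=$(basename "$f" .dat)
  # Rename with a timestamp extension, then compress and move to the archive.
  mv "$f" "$SRC_DIR/${base}_${TS}.dat" &&
  gzip -f "$SRC_DIR/${base}_${TS}.dat" &&
  mv "$SRC_DIR/${base}_${TS}.dat.gz" "$ARCHIVE_DIR/" || status=1
done

subject="ETL archive: SUCCESS"
[ "$status" -eq 0 ] || subject="ETL archive: FAILURE"
# Email the data owner if a mailer is available (mailx assumed on the host).
if command -v mailx >/dev/null 2>&1; then
  echo "Archive run $TS finished with status $status" | mailx -s "$subject" "$NOTIFY"
fi
echo "$subject"
```

In practice such a script would be parameterized and invoked from the job scheduler rather than run by hand.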
Schedule DataStage jobs and shell script jobs with the AutoSys and Control-M scheduling tools.
Schedule Informatica, SQL script, and shell script jobs with the AutoSys scheduling tool.
Create repository services and restore repository contents using the admin console.
Write test cases with expected results to help the testing team validate the mapping in the test environment.
Migrate ETL jobs from development to test and production environments.
Complete end-to-end testing for quality assurance of this application.
Reconcile source data against target data to test data quality.
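At its simplest, source-to-target reconciliation can be sketched as a row-count and column-sum comparison between a source extract and a target extract. The file names, delimiter, and column position below are illustrative assumptions:

```shell
#!/bin/sh
# Minimal reconciliation sketch: compare row counts and the sum of a numeric
# column between source and target extracts. File names and the column
# position (field 2 = amount) are assumptions, not actual project values.

SRC=${SRC:-/tmp/source_extract.csv}
TGT=${TGT:-/tmp/target_extract.csv}

# Sample data so the sketch is self-contained.
printf '1,10.5\n2,20.0\n3,5.25\n' > "$SRC"
printf '1,10.5\n2,20.0\n3,5.25\n' > "$TGT"

src_rows=$(wc -l < "$SRC")
tgt_rows=$(wc -l < "$TGT")
src_sum=$(awk -F, '{s+=$2} END {printf "%.2f", s}' "$SRC")
tgt_sum=$(awk -F, '{s+=$2} END {printf "%.2f", s}' "$TGT")

if [ "$src_rows" -eq "$tgt_rows" ] && [ "$src_sum" = "$tgt_sum" ]; then
  recon_result="MATCH"
else
  recon_result="MISMATCH"
fi
echo "rows: $src_rows/$tgt_rows sums: $src_sum/$tgt_sum -> $recon_result"
```

A real reconciliation would typically also compare key-level checksums or run minus queries in the database, but the count-and-sum check above is a common first pass.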
Execute change requests, manage incidents, and analyze and coordinate the resolution of program flaws in the development environment; hot-fix them in the QA, pre-prod, and prod environments during runs.
Identify errors in existing mappings by analyzing the data flow, evaluating transformations, and fixing bugs so that the mappings conform to business requirements. The supervisor for this position is the Project Lead.
Job Requirements: Master’s degree or its foreign equivalent in computer science, information science, computer information systems, IT, computer engineering, or electronics engineering, plus 12 months of experience in the job offered or as a Data Specialist, Database Developer, or Tech Specialist.
In lieu of a Master’s degree plus 1 year of experience, the employer will accept a Bachelor’s degree with five (5) years of post-bachelor’s progressive experience in the job offered or in a closely related field.
Past experience in the related field must include 1 year of experience with IBM InfoSphere DataStage, PL/SQL, Unix shell scripting, and Teradata. Any suitable combination of education, training, or experience is acceptable.
Work Location: Sterling, VA and other unanticipated client locations in the US. May require relocation.
Work Schedule: 40 hours per week
Hours: 9 AM to 6 PM
Location: Sterling, VA and other unanticipated locations throughout the US. Candidate may be required to relocate to client locations for projects. Any interested applicant may apply to the following location for consideration: Contact: Send resume to HR, Nivid Infotech Inc., 21525 Ridgetop Circle, Suite 280, Sterling, VA 20166.