Posted: 3 days ago
Work from Office
Full Time
* Design, develop, and deploy ETL processes using the Ab Initio Graphical Development Environment (GDE).
* Build high-performance data integration and transformation pipelines for large-scale data processing.
* Work extensively with the Ab Initio Co-Operating System, Enterprise Meta Environment (EME), and metadata-driven development frameworks.
* Develop and optimize graphs for both batch and real-time data processing requirements.
* Integrate with relational databases such as Oracle, SQL Server, Teradata, and DB2, as well as various external data sources.
* Implement continuous flows, web services, and message-based integration using Ab Initio.
* Utilize Ab Initio components including Continuous Flows (Co-Operating System and GDE), Plans, Parameter Sets (psets), graphs with parameter sets, and Conduct-It for job scheduling and orchestration.
* Implement robust data quality, validation, profiling, and error-handling frameworks to ensure data accuracy and reliability.
* Collaborate closely with business analysts, architects, and QA teams to design and deliver scalable, efficient ETL solutions.
* Participate actively in Agile ceremonies, including sprint planning, retrospectives, and daily standups, and contribute to DevOps pipelines for continuous integration and delivery.
* Support production environments by monitoring jobs, troubleshooting failures, and performing root cause analysis for incidents.
* Demonstrate solid expertise in Ab Initio GDE, Co-Operating System, EME, Conduct-It, Parameter Definition Language (PDL), and Data Quality Environment (DQE); Metadata Hub experience preferred.
* Apply strong knowledge of ETL concepts, data integration, data warehousing, and performance-tuning techniques.
* Work on both batch and real-time data processing architectures.
* Develop data quality frameworks and apply metadata-driven development practices for better maintainability and consistency.
* Demonstrate SQL expertise across databases including Oracle, Teradata, DB2, SQL Server, and PostgreSQL.
* Perform database performance tuning, indexing, and query optimization to enhance data processing efficiency.
* Write and maintain UNIX/Linux shell scripts and Python or Perl scripts for automation and operational efficiency.
* Familiarity with Java or other programming languages is an added advantage.
* Implement and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI, or Azure DevOps.
* Use source control systems such as Git and SVN for version management and code collaboration.
* Work within Agile methodologies, including Scrum and Kanban, using project management tools such as Jira or Rally.
* Exposure to cloud platforms such as AWS, Azure, or GCP for building and deploying cloud-based data solutions.
* Experience with big data ecosystems including Hadoop, Spark, Hive, and Kafka is an added advantage.
* Apply knowledge of containerization technologies such as Docker and Kubernetes for environment consistency and scalability.
* Manage job monitoring and scheduling using tools like Control-M, AutoSys, or equivalent schedulers.
* Ensure adherence to security standards, encryption protocols, and access management policies for data and system protection.
Mobilution IT Systems