
340 ETL Development Jobs - Page 6

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 9.0 years

1 - 3 Lacs

Kolkata, Chennai, Bengaluru

Hybrid

Location: Pune, Mumbai, Nagpur, Goa, Noida, Gurgaon, Ahmedabad, Jaipur, Indore, Kolkata, Kochi, Hyderabad, Bangalore, Chennai. Experience: 5-7 years. Notice period: 0-15 days. Open positions: 6. JD: Proven experience with DataStage for ETL development. Strong understanding of data warehousing concepts and best practices. Hands-on experience with Apache Airflow for workflow management. Proficiency in SQL and Python for data manipulation and scripting. Solid knowledge of Unix/Linux shell scripting. Experience with Apache Spark and Databricks for big data processing. Expertise in Snowflake for cloud data warehousing. Familiarity with version control systems (e.g., Git) and CI/CD pipelines. Excellent problem-solving and communication skills.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

We are seeking an experienced Lead Database Engineer to take charge of the design, development, and optimization of our extensive data solutions. Your role will involve a profound understanding of Oracle SQL, PL/SQL, Data Modeling, Python, and effective leadership skills. You will play a crucial role in constructing and managing robust ETL pipelines, ensuring data integrity and performance, and guiding a team towards implementing best practices and continuous improvement. Your responsibilities will include designing database objects like tables and views, developing and optimizing high-performance ETL pipelines using Oracle SQL and PL/SQL, providing technical leadership by mentoring data engineers, collaborating with stakeholders to translate data needs into scalable solutions, ensuring data integrity and security, and staying updated on industry trends for continuous improvement of data infrastructure and processes. To excel in this role, you should have at least 8 years of hands-on experience in data engineering, expertise in Oracle SQL with advanced optimization skills, understanding of data warehousing concepts, strong programming skills in PL/SQL, leadership qualities to manage projects and foster a positive work environment, knowledge of data warehousing principles and cloud platforms, excellent problem-solving abilities, and familiarity with CI/CD pipelines and DevOps practices. Bonus points for experience in other programming languages like Python/Java and knowledge of real-time data processing frameworks such as Kafka. A Bachelor's degree or equivalent experience is required for this role. This job description serves as an overview of the duties performed, and additional job-related tasks may be assigned as necessary.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

3 - 7 Lacs

Hyderabad

Work from Office

About the Role: Grade Level (for internal use): 08. The Role: Associate I, Software Engineer.

The Team: We are looking for a highly motivated, enthusiastic, and skilled software engineer to join an agile scrum team developing technology solutions for S&P Global Market Intelligence. The team is responsible for modernizing and migrating the internal and product platforms utilizing the latest technologies.

The Impact: As Associate I, Software Engineer, you will be part of the content systems development team that manages multi-terabyte data using the Microsoft .NET tech stack and big data technologies. You will be part of a heavily data-intensive environment. This role expects a candidate with deep Python, SQL Server, and ETL development experience.

What's in it for you: It's a fast-paced agile environment that deals with huge volumes of data, so you'll have an opportunity to sharpen your data skills and work on an emerging technology stack.

Responsibilities: Design and implement software components for content systems. Perform analysis and articulate solutions. Design underlying engineering for use in multiple product offerings supporting a large volume of end-users. Develop project plans with task breakdowns and estimates. Continuously learn and translate those learnings into improvements to solutions. Solve a variety of complex problems and figure out possible solutions, weighing the costs and benefits.

What We're Looking For: Basic Qualifications: Bachelor's degree in computer science/engineering or equivalent. 3+ years of relevant experience with SQL Server and Python. Minimum 2 years in AWS Cloud. Proficient in at least one CI/CD tool like GitHub. Experience building AWS Lambdas and AWS Step Functions using Terraform. Strong Python and SQL skills. Proficient with software development lifecycle (SDLC) methodologies like Agile and Test-driven development. Good experience developing solutions involving relational database technologies on the SQL Server platform, with stored procedure programming experience using Transact-SQL. Passionate, smart, and articulate developer. Able to work well individually and with a team. Strong problem-solving skills. Good work ethic, self-starter, and results-oriented.

Preferred Qualifications: Familiarity and/or enthusiasm with Data Science / Machine Learning is a plus. Experience working in cloud computing environments such as AWS. Experience with DynamoDB or Postgres.

What's In It For You. Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: Health care coverage designed for the mind and body. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH203 - Entry Professional (EEO Job Group), SWP Priority Ratings - (Strategic Workforce Planning)

Posted 2 weeks ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Hyderabad

Work from Office

About the Role: Grade Level (for internal use): 10. Title: Senior ETL and Backend Developer (Salesforce). Job Location: Hyderabad, Ahmedabad, Gurgaon, Virtual-India.

The Team: We are seeking a skilled Senior ETL and Backend Developer with extensive experience in Informatica and Salesforce. The ideal candidate will be responsible for designing, developing, and maintaining ETL processes and backend systems to ensure seamless data integration and management. The team works in a challenging environment that gives ample opportunities to use innovative ideas to solve complex problems. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe.

The Impact: You will be making significant contributions in building solutions for web applications using new front-end technologies and microservices. The work you do will deliver products that build solutions for S&P Global Commodity Insights customers.

Responsibilities: ETL Development: Design, develop, and maintain ETL processes using Informatica PowerCenter and other ETL tools. Data Integration: Integrate data from various sources, including databases, APIs, flat files, and cloud storage, into data warehouses or data lakes. Backend Development: Develop and maintain backend systems using relevant programming languages and frameworks. Salesforce Integration: Implement and manage data integration between Salesforce and other systems. Performance Tuning: Optimize ETL processes and backend systems for speed and efficiency. Data Quality: Ensure data quality and integrity through rigorous testing and validation. Monitoring and Maintenance: Continuously monitor ETL processes and backend systems for errors or performance issues and make necessary adjustments. Collaboration: Work closely with data architects, data analysts, and business stakeholders to understand data requirements and deliver solutions.

Qualifications: Basic Qualifications: Bachelor's/Master's degree in Computer Science, Information Systems, or equivalent. A minimum of 8+ years of experience in software engineering and architecture. A minimum of 5+ years of experience in ETL development, backend development, and data integration. A minimum of 3+ years of Salesforce development, administration, and integration. Proficiency in Informatica PowerCenter and other ETL tools. Strong knowledge of SQL and database management systems (e.g., Oracle, SQL Server). Experience with Salesforce integration and administration. Proficiency in backend development languages (e.g., Java, Python, C#). Familiarity with cloud platforms (e.g., AWS, Azure) is a plus. Excellent problem-solving skills and attention to detail. Ability to work independently and as part of a team. Nice to have: GenAI, Java, Spring Boot, Knockout JS, RequireJS, Node.js, Lodash, TypeScript, VSTest/MSTest/NUnit.

Preferred Qualifications: Proficient with software development lifecycle (SDLC) methodologies like SAFe, Agile, and Test-driven development. Experience with other ETL tools and data integration platforms. Informatica Certified Professional; Salesforce Certified Administrator or Developer. Knowledge of back-end technologies such as C#/.NET, Java, or Python. Excellent problem-solving, analytical, and technical troubleshooting skills. Able to work well individually and with a team. Good work ethic, self-starter, and results-oriented. Excellent communication skills are essential, with strong verbal and writing proficiencies.

About S&P Global Commodity Insights: At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating the Energy Transition, S&P Global Commodity Insights coverage includes oil and gas, power, chemicals, metals, agriculture, and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights .

What's In It For You. Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: Health care coverage designed for the mind and body. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

SWP Priority Ratings - (Strategic Workforce Planning)

Posted 2 weeks ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Bachelor's degree or military experience in a related field, preferably computer science, and 7 years of experience in ETL development within a Data Warehouse. Deep understanding of enterprise data warehousing best practices and standards. Strong experience in software engineering, comprising the design, development, and operation of robust and highly scalable cloud infrastructure services. Strong experience with Python/PySpark, DataStage ETL, and SQL development. Proven experience in cloud infrastructure projects with hands-on migration expertise on public clouds such as AWS and Azure, preferably Snowflake. Knowledge of Cybersecurity organization practices, operations, risk management processes, principles, architectural requirements, engineering, and threats and vulnerabilities, including incident response methodologies. Understanding of Authentication & Authorization Services and Identity & Access Management. Strong communication and interpersonal skills.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

20 - 32 Lacs

Bengaluru

Work from Office

Dear Candidate, we are hiring for a top MNC! Role: DB ETL Developer. Location: Bangalore. Contract: 12 months. Experience: 7-15 years. Notice: Immediate to 30 days only. Position Description: A bachelor's degree in Computer Science or a related field. 7-9 years of experience working as a hands-on developer in Informatica and ETL technologies. Worked extensively on data integration, designing, and developing reusable interfaces. Advanced experience in Python, DB2, Postgres, shell scripting, Unix, Perl scripting, DB platforms, database design, and modeling. Hands-on experience with cloud-based technology. Expert-level understanding of data warehouse and core database concepts and relational database design. Experience in writing stored procedures, optimization, and performance tuning. Strong technology acumen and a deep strategic mindset. Proven track record of delivering results. Proven analytical skills and experience making decisions based on hard and soft data. A desire and openness to learning and continuous improvement, both of yourself and your team members. Hands-on experience in the development of APIs; Informatica is a plus. Experience with Business Intelligence tools is a plus. If interested, please share your updated CV to arthie.m@orcapod.work

Posted 2 weeks ago

Apply

4.0 - 9.0 years

13 - 23 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Role & responsibilities / Preferred candidate profile. Experience: A minimum of 4-10 years of experience in data integration/orchestration services, service architecture, and providing data-driven solutions for client requirements. Experience with Microsoft Azure cloud and Snowflake SQL, database query/performance tuning. Experience with Qlik Replicate and Compose (Change Data Capture) tools is considered a plus. Strong data warehousing concepts and ETL tools such as Talend Cloud Data Integration are a must. Exposure to financial domain knowledge is considered a plus. Cloud managed services such as source control (GitHub) and MS Azure/DevOps are considered a plus. Prior experience with State Street and Charles River Development (CRD) is considered a plus. Experience in tools such as Visio, PowerPoint, and Excel. Exposure to third-party data providers such as Bloomberg, Reuters, MSCI, and other rating agencies is a plus. Strong SQL knowledge and debugging skills are a must. Responsibilities: As a Data Integration Developer/Sr Developer, be hands-on with ETL/ELT data pipelines, Snowflake DWH, CI/CD deployment pipelines, and data-readiness (data quality) design, development, and implementation, and address code or data issues. Experience in designing and implementing modern data pipelines for a variety of data sets, including internal/external data sources, complex relationships, various data formats, and high volume. Experience and understanding of ETL job performance techniques, exception handling, query performance tuning/optimizations, and data loads meeting the runtime/schedule SLAs for both batch and real-time data use cases. Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions. Demonstrated strong collaborative experience across regions (APAC, EMEA, and NA) to come up with design standards, high-level design solution documents, cross-training, and resource onboarding activities. Good understanding of the SDLC process, governance clearance, peer code reviews, unit test results, code deployments, code security scanning, Confluence, and Jira/Kanban stories. Strong attention to detail during root cause analysis, SQL query debugging, and defect issue resolution by working with multiple business/IT stakeholders.
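For illustration only, a minimal sketch of the kind of data-readiness (data quality) check this role describes, written in Python with the snowflake-connector-python package; the account, credentials, and table name are assumptions, not details from the posting.

```python
# Minimal sketch of a Snowflake data-readiness check.
# Assumptions: snowflake-connector-python installed; STG_TRADES is a hypothetical staging table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # assumed account locator
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Row-count and null-rate checks before promoting a batch downstream.
    cur.execute("""
        SELECT COUNT(*) AS total_rows,
               SUM(IFF(trade_id IS NULL, 1, 0)) AS null_trade_ids
        FROM STG_TRADES
        WHERE load_date = CURRENT_DATE
    """)
    total_rows, null_ids = cur.fetchone()
    if total_rows == 0 or null_ids > 0:
        raise ValueError(f"Data-quality gate failed: rows={total_rows}, null ids={null_ids}")
finally:
    conn.close()
```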

Posted 2 weeks ago

Apply

6.0 - 8.0 years

10 - 20 Lacs

Bengaluru

Work from Office

About Apexon: Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients’ toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents. We enable #HumanFirstDigital.

Required Skills & Qualifications: Minimum 6-8 years of hands-on experience in ETL development, data warehousing, and business intelligence. Extensive hands-on experience with Informatica PowerCenter (versions 9.x/10.x), including Designer, Workflow Manager, and Workflow Monitor. Extensive expertise in Oracle SQL and PL/SQL, including advanced concepts (e.g., analytical functions, complex joins, dynamic SQL, exception handling). Proven experience with Oracle Database 11g, 12c, 19c or higher. Strong understanding of ETL methodologies, data warehousing principles, and dimensional modeling (Star Schema, Snowflake Schema). Experience with performance tuning of Informatica PowerCenter mappings/workflows and large-scale Oracle databases. Proficiency in shell scripting (e.g., Bash, Korn Shell) for automation of ETL jobs and pre/post-session commands. Familiarity with version control systems (e.g., Git). Excellent problem-solving, analytical, and debugging skills. Strong communication (verbal and written) and interpersonal skills. Ability to work independently and as part of a collaborative team in a fast-paced environment. Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.

Preferred Skills: Experience with Oracle PL/SQL-based ETL tools (e.g., Informatica). Familiarity with Agile development methodologies. Proficiency in Git for source code control, including code migration and deployment workflows. Demonstrated ability to write and optimize complex SQL queries for large datasets. Design, develop, test, and maintain robust and highly efficient Oracle PL/SQL stored procedures, functions, packages, and database triggers. Implement complex business logic, data transformations, and automated tasks using PL/SQL to support ETL processes and application requirements.

Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work®, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK. Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We take affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com).

Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience, and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer: Group Health Insurance covering a family of 4, Term Insurance and Accident Insurance, Paid Holidays & Earned Leaves, Paid Parental Leave, Learning & Career Development, and Employee Wellness.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

You will be working as a Databricks Developer with 3-6 years of experience, located in India. Joining the data engineering and AI innovation team, your main responsibilities will include developing scalable data pipelines using Databricks and Apache Spark, implementing AI/ML workflows with tools like MLflow and AutoML, collaborating with data scientists to deploy models into production, performing ETL development, data transformation, and model training pipelines, managing Delta Lake architecture, and working closely with cross-functional teams to ensure data quality and governance.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

As a seasoned Lead Database Engineer, you will be responsible for spearheading the design, development, and optimization of large-scale data solutions. Your role will involve utilizing your deep understanding of Oracle SQL, PL/SQL, Data Modeling, Python, and proven leadership capabilities to build and maintain robust ETL pipelines. Ensuring data integrity and performance, as well as guiding a team towards best practices and continuous improvement, will be crucial aspects of your responsibilities. Your key responsibilities will include designing database objects like tables and views for data modeling, developing and fine-tuning high-performance ETL pipelines using Oracle SQL and PL/SQL, and providing technical leadership by mentoring a team of data engineers. Collaboration with stakeholders to understand data needs, translating them into scalable solutions, ensuring data integrity and security, and implementing continuous improvements will also be part of your role. To excel in this position, you should have 8+ years of hands-on experience in data engineering or a related field, mastery of Oracle SQL with advanced query optimization skills, understanding of data warehousing concepts, and strong programming abilities in PL/SQL. Additionally, you should possess leadership qualities to effectively lead and mentor teams, manage projects, and create a positive work environment. A solid understanding of data warehousing principles, distributed computing architectures, and cloud platforms, along with excellent analytical and problem-solving skills, will be essential. Experience with CI/CD pipelines, DevOps practices, knowledge of real-time data processing frameworks like Kafka, and proficiency in other programming/scripting languages such as Python or Java will be considered a plus. A Bachelor's degree or equivalent experience is required for this role. Please note that this job description offers a high-level overview of the responsibilities involved. Additional job-related duties may be assigned as needed.

Posted 2 weeks ago

Apply

9.0 - 13.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Experience in Microsoft SQL Server database development (T-SQL). Experience in building SSIS packages. Good experience in creating LLDs. Experience delivering solutions utilizing the entire Microsoft BI stack (SSAS, SSIS). Experience with SQL Server/T-SQL programming in the creation and optimization of stored procedures, triggers, and user-defined functions. Experience working in a data warehouse environment and a strong understanding of dimensional data modeling concepts. Must be able to build Business Intelligence solutions in a collaborative, agile development environment. Strong understanding of data ingestion, data processing, orchestration, parallelization, transformation, and ETL fundamentals. Sound knowledge of data analysis using any SQL tools. Experience in ADF, Synapse, and other Azure components. Design, develop, automate, and support complex applications to extract, transform, and load data. Should have knowledge of error handling and performance tuning for data pipelines. Skills: SQL, SSIS, ADF, T-SQL, ETL & DW, good communication. Qualifications: Graduate. Additional Information: Work from office; no cell phone policy.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

0 - 2 Lacs

Hyderabad

Hybrid

Responsibilities: Design, develop, test, and maintain ETL pipelines to ingest data from various sources (databases, flat files, APIs). Collaborate with business analysts, data architects, and stakeholders to gather requirements and translate them into technical design. Perform data profiling, cleansing, transformation, validation, and mapping based on business rules. Optimize ETL jobs for performance, scalability, and reliability; tune SQL queries and pipelines. Monitor pipeline execution, handle errors, implement alerting, and troubleshoot failures. Document ETL workflows, data lineage, dependencies, and technical specifications. Maintain data quality checks, implement data governance standards, and support compliance. Provide guidance to junior developers, participate in code reviews, and share best practices. Required Skills & Experience: SQL expertise: writing efficient queries, performance tuning, indexing. ETL tools: hands-on with SSIS, Informatica, Talend, Apache NiFi, AWS Glue, or similar. Programming/scripting: Python, Java, C#, shell scripting, PySpark. Data modeling & warehousing: experience with star/snowflake schemas and MPP databases (Redshift, Snowflake, BigQuery). Databases: proficiency with SQL databases (e.g., Oracle, SQL Server, PostgreSQL) and NoSQL systems. Data quality & validation: experience in defining and enforcing data cleansing and validation rules. Workflow orchestration: familiarity with Airflow or similar orchestration tools.
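As one illustration of the pipeline work described above, here is a minimal, hedged extract-transform-load sketch in Python using pandas and SQLAlchemy; the file path, table name, and connection URL are assumptions, not details from the posting.

```python
# Illustrative ETL step: extract from a flat file, cleanse, and load to a staging table.
# Assumptions: a local PostgreSQL warehouse and an "incoming/orders.csv" source file.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://etl:***@localhost:5432/warehouse")

# Extract: flat file delivered by an upstream system.
orders = pd.read_csv("incoming/orders.csv", parse_dates=["order_date"])

# Transform: cleanse, validate, and conform to the target schema.
orders = orders.dropna(subset=["order_id", "customer_id"])
orders["amount"] = orders["amount"].round(2)
orders = orders[orders["amount"] >= 0]

# Load: append into a staging table in the warehouse.
orders.to_sql("stg_orders", engine, if_exists="append", index=False)
```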

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a skilled PySpark Data Engineer, you will be responsible for designing, implementing, and maintaining PySpark-based applications to handle complex data processing tasks, ensure data quality, and integrate with diverse data sources. Your role will involve developing, testing, and optimizing PySpark applications to process, transform, and analyze large-scale datasets from various sources such as relational databases, NoSQL databases, batch files, and real-time data streams. You will collaborate with data analysts, data scientists, and data architects to understand data processing requirements and deliver high-quality data solutions.

Your key responsibilities will include designing efficient data transformation and aggregation processes, developing error handling mechanisms for data integrity, optimizing PySpark jobs for performance, and working with distributed datasets in Spark. Additionally, you will design and implement ETL processes to ingest and integrate data from multiple sources, ensuring consistency, accuracy, and performance. You should have a Bachelor's degree in Computer Science or a related field, along with 5+ years of hands-on experience in big data development. Proficiency in PySpark, Apache Spark, and ETL development tools is essential for this role.

To succeed in this position, you should have a strong understanding of data processing principles, techniques, and best practices in a big data environment. You must possess excellent analytical and problem-solving skills, with the ability to translate business requirements into technical solutions. Strong communication and collaboration skills are also crucial for effectively working with data analysts, data architects, and other team members. If you are looking to drive the development of robust data processing and transformation solutions within a fast-paced, data-driven environment, this role is ideal for you.
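A minimal PySpark sketch of the kind of transformation and aggregation work this role describes; the input path, column names, and output location are assumptions rather than details from the posting.

```python
# Read raw events, apply a basic data-quality filter, aggregate daily, and write Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

events = (
    spark.read.option("header", True).option("inferSchema", True)
    .csv("s3://example-bucket/raw/events/")           # assumed input location
)

daily = (
    events
    .filter(F.col("event_type").isNotNull())           # basic data-quality filter
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("events"),
         F.countDistinct("user_id").alias("unique_users"))
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_events/"        # assumed output location
)
```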

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a DevOps Engineer for our team based in Europe, you will be responsible for leveraging your skills in Informatica PowerCenter and PowerExchange, Data Vault modeling, and Snowflake. With over 7 years of experience, you will bring valuable expertise in ETL development, specifically with Informatica PowerCenter and Data Vault modeling. Your proficiency in DevOps practices and SAFe methodologies will be essential in ensuring the smooth operation of our systems. Moreover, your hands-on experience with Snowflake and dbt will be advantageous in optimizing our data processes. You will have the opportunity to work within a scrum team environment, where your contributions will be vital. If you have previous experience as a Scrum Master or aspire to take on such a role, we encourage you to apply. If you are a detail-oriented professional with a passion for driving efficiency and innovation in a dynamic environment, we would love to hear from you. Please send your profile to contact@squalas.com to be considered for this exciting opportunity.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

Kolkata, West Bengal

On-site

You are a Data Engineer with 2 to 4 years of experience in Python and PL/SQL. Your primary responsibility is to design, develop, and maintain data pipelines, ETL processes, and database solutions. You will be working on ETL Development & Data Processing, where you will develop, optimize, and maintain ETL pipelines for data ingestion, transformation, and integration. You will handle structured and semi-structured data from various sources and implement data cleansing, validation, and enrichment processes using Python and PL/SQL.

In Database Development & Optimization, you will write, debug, and optimize complex SQL queries, stored procedures, functions, and triggers in PL/SQL. Additionally, you will design and maintain database schemas, indexing strategies, and partitioning for performance optimization, ensuring data consistency, quality, and governance across all data sources. Your role also involves Data Engineering & Automation, where you will automate data workflows using Python scripts and scheduling tools like Airflow, Cron, or DBMS_JOB. You will optimize query performance, troubleshoot database-related performance issues, and monitor data pipelines for failures while implementing alerting mechanisms.

Collaboration & Documentation are crucial aspects of your job. You will collaborate closely with Data Analysts, Architects, and Business teams to understand data requirements. Documenting ETL processes, database schemas, and data flow diagrams will be part of your responsibilities. You will also participate in code reviews, testing, and performance tuning activities.

Your Technical Skills should include strong experience in Python for data processing (Pandas, NumPy, PySpark), expertise in PL/SQL, hands-on experience with ETL tools, and knowledge of relational and non-relational databases. Exposure to Cloud & Big Data technologies like AWS/GCP/Azure, Spark, or Snowflake will be advantageous. Soft skills such as problem-solving, effective communication, teamwork, and the ability to manage tasks independently are essential for this role. This is a full-time, permanent position with a day shift schedule and an in-person work location.
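One possible way to schedule a Python-driven load with Airflow, as the automation responsibilities above mention; the DAG id, schedule, and callable are illustrative assumptions, not project specifics.

```python
# Minimal Airflow 2.x DAG sketch that runs a daily Python ETL step.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def load_daily_batch():
    # Placeholder for the actual Python/PL-SQL-driven load logic.
    print("Running daily ETL batch")

with DAG(
    dag_id="daily_etl_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_task = PythonOperator(
        task_id="load_daily_batch",
        python_callable=load_daily_batch,
    )
```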

Posted 3 weeks ago

Apply

6.0 - 11.0 years

11 - 16 Lacs

Gurugram

Work from Office

Project description: We are looking for a star Python Developer who is not afraid of work and challenges! Having gladly become a partner of a famous financial institution, we are gathering a team of professionals with a wide range of skills to successfully deliver business value to the client. Responsibilities: Analyse existing SAS DI pipelines and SQL-based transformations. Translate and optimize SAS SQL logic into Python code using frameworks such as PySpark. Develop and maintain scalable ETL pipelines using Python on AWS EMR. Implement data transformation, cleansing, and aggregation logic to support business requirements. Design modular and reusable code for distributed data processing tasks on EMR clusters. Integrate EMR jobs with upstream and downstream systems, including AWS S3, Snowflake, and Tableau. Develop Tableau reports for business reporting. Skills (must have): 6+ years of experience in ETL development, with at least 5 years working with AWS EMR. Bachelor's degree in Computer Science, Data Science, Statistics, or a related field. Proficiency in Python for data processing and scripting. Proficient in SQL and experienced with one or more ETL tools (e.g., SAS DI, Informatica). Hands-on experience with AWS services: EMR, S3, IAM, VPC, and Glue. Familiarity with data storage systems such as Snowflake or RDS. Excellent communication skills and ability to work collaboratively in a team environment. Strong problem-solving skills and ability to work independently. Nice to have: N/A
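A hedged sketch of translating a SAS PROC SQL-style aggregation into Spark SQL running on EMR, of the kind the responsibilities above describe; the table names and S3 paths are assumptions.

```python
# Express a SAS SQL-style summary step as Spark SQL on EMR.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sas-to-pyspark").getOrCreate()

trades = spark.read.parquet("s3://example-bucket/staging/trades/")  # assumed input
trades.createOrReplaceTempView("trades")

# Equivalent of a SAS PROC SQL GROUP BY summary.
summary = spark.sql("""
    SELECT account_id,
           trade_date,
           SUM(notional) AS total_notional,
           COUNT(*)      AS trade_count
    FROM trades
    GROUP BY account_id, trade_date
""")

summary.write.mode("overwrite").parquet("s3://example-bucket/curated/trade_summary/")
```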

Posted 3 weeks ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Chennai

Work from Office

Job Summary: We are seeking a highly skilled Data Engineer to design, develop, and maintain robust data pipelines and architectures. The ideal candidate will transform raw, complex datasets into clean, structured, and scalable formats that enable analytics, reporting, and business intelligence across the organization. This role requires strong collaboration with data scientists, analysts, and cross-functional teams to ensure timely and accurate data availability and system performance. Key Responsibilities: Design and implement scalable data pipelines to support real-time and batch processing. Develop and maintain ETL/ELT processes that move, clean, and organize data from multiple sources. Build and manage modern data architectures that support efficient storage, processing, and access. Collaborate with stakeholders to understand data needs and deliver reliable solutions. Perform data transformation, enrichment, validation, and normalization for analysis and reporting. Monitor and ensure the quality, integrity, and consistency of data across systems. Optimize workflows for performance, scalability, and cost-efficiency. Support cloud and on-premise data integrations, migrations, and automation initiatives. Document data flows, schemas, and infrastructure for operational and development purposes. Apply best practices in data governance, security, and compliance. Required Qualifications & Skills: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Proven 6+ years of experience in data engineering, ETL development, or data pipeline management. Proficiency with tools and technologies such as: SQL, Python, Spark, Scala; ETL tools (e.g., Apache Airflow, Talend); cloud platforms (e.g., AWS, GCP, Azure); big data tools (e.g., Hadoop, Hive, Kafka); data warehouses (e.g., Snowflake, Redshift, BigQuery). Strong understanding of data modeling, data architecture, and data lakes. Experience with CI/CD, version control, and working in Agile environments. Preferred Qualifications: Experience with data observability and monitoring tools. Knowledge of data cataloging and governance frameworks. AWS/GCP/Azure data certification is a plus.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Jaipur, Rajasthan

On-site

Job Description: Kogta Financial Ltd is seeking an experienced and highly skilled ETL & Data Warehouse Developer with expertise in utilizing AWS services to join our dynamic team. As a key member of our data engineering team, you will be responsible for designing, developing, and optimizing ETL processes and data warehousing solutions on the AWS platform. The ideal candidate should have a solid background in ETL development and data modeling, a deep understanding of AWS services, and hands-on experience in crafting complex SQL queries and optimizing data workflows. Responsibilities: ETL Development: Design, develop, and implement robust ETL processes using AWS Glue, AWS Data Pipeline, or custom scripts as needed. Ensure the efficient extraction, transformation, and loading of data from diverse sources into our data warehouse. Data Warehousing: Design and maintain data warehouse solutions on AWS, with a focus on scalability, performance, and reliability. Implement and optimize data models for efficient storage and retrieval in AWS Redshift. AWS Service Utilization: Leverage AWS services such as S3, Lambda, Glue, Redshift, and others to build end-to-end data solutions. Stay abreast of AWS developments and recommend the adoption of new services to enhance our data architecture. SQL Expertise: Craft complex SQL queries to support data analysis, reporting, and business intelligence requirements. Optimize SQL code for performance and efficiency, and troubleshoot any issues related to data retrieval. Performance Optimization: Optimize ETL workflows and data warehouse queries to ensure optimal performance. Identify and resolve bottlenecks in data processing and storage. Data Integration: Collaborate with cross-functional teams to integrate data from various sources into the data warehouse. Work closely with business stakeholders to understand data requirements. Security and Compliance: Implement and maintain security measures to protect data integrity and ensure compliance with industry standards and regulations. Collaborate with the security and compliance teams to implement best practices. Documentation: Document ETL processes, data models, and system configurations for future reference. Ensure comprehensive documentation of the developed solutions for knowledge transfer. Requirements: Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience as an ETL & Data Warehouse Developer with a focus on AWS services and SQL expertise. Strong proficiency in SQL, stored procedures, and views. In-depth understanding of AWS services related to data processing and storage. Experience with data modeling and designing efficient data warehouses. Familiarity with best practices in data security, compliance, and governance. Strong problem-solving and analytical skills. Excellent communication and collaboration skills. Certification in AWS or relevant technologies.
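An illustrative AWS Glue PySpark job skeleton of the kind this role describes; it is a sketch intended to run inside the Glue job runtime, and the catalog database, table, column names, and S3 path are assumptions.

```python
# Glue ETL job sketch: read from the Data Catalog, light transform, write Parquet to S3.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract from the Glue Data Catalog (assumed database/table names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="loans"
)

# Transform: drop a scratch column and cast an ambiguous numeric field.
clean = source.drop_fields(["temp_col"]).resolveChoice(specs=[("amount", "cast:double")])

# Load: write curated Parquet back to S3.
glue_context.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/loans/"},
    format="parquet",
)
job.commit()
```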

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 22 Lacs

Bengaluru

Hybrid

Role & Responsibilities: The candidate will have to leverage strong collaboration skills and the ability to independently develop and design highly complex data sets and ETL processes to develop a data warehouse and to ask the right questions. You'd be engaged in a fast-paced learning environment and will be solving problems for the largest organizations in the world, mostly Fortune 500. The candidate will also work closely with internal business teams and clients on various kinds of Data Engineering problems, such as the development of a data warehouse, advanced stored procedures, ETL pipelines, reporting, data governance, and BI development. Basic Qualifications: Bachelor's degree in Computer Science, Engineering, Operations Research, Math, Economics, or a related discipline. Strong SQL, Python, PySpark ETL development, Azure Data Factory, and Power BI knowledge and hands-on experience. Proficient in understanding business requirements and converting them into process flows and code (special preference for SQL-based stored procedures). Develop and design data architecture and frameworks for optimal performance and response time. Strong analytical skills and the ability to start from ambiguous problem statements, identify and access relevant data, make appropriate assumptions, perform insightful analysis, and draw conclusions relevant to the business problem. Excellent communication skills to communicate efficiently (written and spoken) in English. Demonstrated ability to communicate complex technical problems in simple, plain stories. Ability to present information professionally and concisely with supporting data. Ability to work effectively and independently in a fast-paced environment with tight deadlines. Ability to engage with cross-functional teams for implementation of project and program requirements. 5+ years of hands-on experience in Data Engineer or tech lead roles. 5+ years of experience in data engineering on the Azure cloud, highly proficient in the Azure ecosystem and its services.

Posted 3 weeks ago

Apply

12.0 - 15.0 years

9 - 13 Lacs

Hyderabad

Work from Office

About The Role. Project Role: Data Platform Engineer. Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Must-have skills: Data Building Tool. Good-to-have skills: NA. Minimum 12 year(s) of experience is required. Educational Qualification: 15 years of full-time education.

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives. Your role will require you to navigate complex data environments, providing insights and recommendations that drive effective data management and governance practices.

Roles & Responsibilities: Expected to be an SME. Collaborate and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute on key decisions. Expected to provide solutions to problems that apply across multiple teams. Facilitate knowledge sharing sessions to enhance team capabilities and foster a culture of continuous improvement. Monitor and evaluate the performance of data platform components, making recommendations for enhancements and optimizations.

Professional & Technical Skills: Must-have skills: Proficiency in Data Building Tool. Strong understanding of data architecture principles and best practices. Experience with data integration techniques and tools. Familiarity with cloud-based data platforms and services. Ability to analyze and troubleshoot data-related issues effectively.

Additional Information: The candidate should have a minimum of 12 years of experience in Data Building Tool. This position is based at our Hyderabad office. A 15-year full-time education is required.

Posted 3 weeks ago

Apply

15.0 - 20.0 years

9 - 13 Lacs

Hyderabad

Work from Office

About The Role. Project Role: Data Platform Engineer. Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Must-have skills: Data Building Tool. Good-to-have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years of full-time education.

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Roles & Responsibilities: Expected to be an SME. Collaborate and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute on key decisions. Provide solutions to problems for their immediate team and across multiple teams. Facilitate knowledge sharing sessions to enhance team capabilities. Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills: Must-have skills: Proficiency in Data Building Tool. Strong understanding of data modeling and architecture principles. Experience with data integration techniques and tools. Familiarity with cloud-based data platforms and services. Ability to troubleshoot and resolve data-related issues efficiently.

Additional Information: The candidate should have a minimum of 5 years of experience in Data Building Tool. This position is based at our Hyderabad office. A 15-year full-time education is required.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Navi Mumbai

Work from Office

About The Role. Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must-have skills: Google BigQuery. Good-to-have skills: Teradata BI. Minimum 5 year(s) of experience is required. Educational Qualification: minimum 15 years of full-time education.

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality.

Roles & Responsibilities: Expected to be an SME. Collaborate and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute on key decisions. Provide solutions to problems for their immediate team and across multiple teams. Lead application development projects. Conduct code reviews and ensure coding standards are met.

Professional & Technical Skills: Must-have skills: Proficiency in Google BigQuery. Strong understanding of data warehousing concepts. Experience with cloud-based data platforms. Hands-on experience in SQL and database management. Good-to-have skills: Experience with Teradata BI.

Additional Information: The candidate should have a minimum of 5 years of experience in Google BigQuery. This position is based at our Mumbai office. A minimum of 15 years of full-time education is required.

Posted 3 weeks ago

Apply

7.0 - 15.0 years

9 - 13 Lacs

Hyderabad

Work from Office

We are looking for an experienced and highly skilled Lead Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in data engineering, specifically with Talend, SQL, Data Warehousing (DWH), and Snowflake. This role will involve leading data engineering projects, managing data pipelines, and optimizing data workflows to support business analytics and data-driven decision-making. Key Responsibilities: Lead the design, development, and optimization of scalable data pipelines using Talend for ETL processes. Manage the architecture and performance tuning of data systems, including Snowflake and other cloud-based data platforms. Develop, test, and maintain SQL scripts and queries to extract, transform, and load data into data warehouses. Ensure data integration from various source systems into a centralized Data Warehouse (DWH) while maintaining data quality and integrity. Collaborate with cross-functional teams, including business analysts, data scientists, and stakeholders, to identify and implement data solutions. Lead the optimization of data workflows and the automation of processes to improve efficiency and reduce latency. Mentor and guide junior data engineers and team members on best practices, tools, and techniques in data engineering. Troubleshoot, diagnose, and resolve data-related issues and improve the overall data architecture. Stay up-to-date with the latest trends and advancements in data technologies and methodologies. Key Skills & Qualifications: Talend: Strong hands-on experience with Talend for ETL development and data integration tasks. SQL: Advanced proficiency in SQL for data manipulation, querying, and performance optimization. Snowflake: Deep experience with Snowflake as a cloud-based data platform, including performance tuning and optimization. Data Warehousing (DWH): Strong understanding of Data Warehouse architecture, design, and management. Proven experience with data modeling, data migration, and data transformation. Strong knowledge of cloud data platforms, preferably AWS, Azure, or Google Cloud. Excellent problem-solving skills, with the ability to troubleshoot complex data issues. Effective communication and leadership skills, with experience in leading cross-functional teams. Familiarity with Agile methodologies is a plus. Education and Experience: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field (Master's preferred). Minimum of 5+ years of experience in data engineering, with at least 2 years in a leadership or senior role. Hands-on experience with Talend, SQL, Snowflake, and Data Warehousing in large-scale, high-volume environments.

Posted 3 weeks ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Encore Software Services is looking for an ETL Developer to join our dynamic team and embark on a rewarding career journey. Consulting with data management teams to get a big-picture idea of the company's data storage needs. Presenting the company with warehousing options based on their storage needs. Designing and coding the data warehousing system to desired company specifications. Conducting preliminary testing of the warehousing environment before data is extracted. Extracting company data and transferring it into the new warehousing environment. Testing the new storage system once all the data has been transferred. Troubleshooting any issues that may arise. Providing maintenance support. Real-time data ingestion, streaming data, Kafka, AWS Cloud streaming tools, ETL, and semi-structured data formats like JSON and XML. Tools: Talend, Kafka, AWS EventBridge, Lambda, and strong SQL & Python.
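A minimal sketch of real-time JSON ingestion from Kafka using the kafka-python package, illustrating the streaming-ingestion skills listed above; the topic name, broker address, and message fields are assumptions.

```python
# Consume JSON events from a Kafka topic and apply a light validation step.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders-stream",                                    # assumed topic name
    bootstrap_servers=["localhost:9092"],               # assumed broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
)

for message in consumer:
    event = message.value
    # Light validation before handing off to the warehouse loader.
    if event.get("amount", 0) > 0:
        print(event["order_id"], event["amount"])
```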

Posted 3 weeks ago

Apply

8.0 - 13.0 years

5 - 10 Lacs

Mumbai

Work from Office

To help the project by owning the critical data transformation and integration processes, enabling faster and simpler development and maintenance. The ideal profile is someone with excellent Ab Initio experience who can quickly adapt and deliver. The Developer will have the following responsibilities: Analyse and estimate requirements. Write detailed technical analysis with impacts (technical/functional). Design and develop high-quality code. Unit test and provide support during implementation. Bug fixing and performance optimization. Contributing Responsibilities: Support the Service Delivery team on production issues. Technical & Behavioral Competencies: Solid experience as an RDBMS developer, with stored procedures, query performance tuning, and ETL. Design an ETL framework for audit and data reconciliation to manage batch and real-time interfaces. Develop ETL jobs for automation and monitoring, and be responsible for job performance optimization through the use of ETL development tools or custom-developed procedures. Set up best practices in the App Dev team and share best practices amongst teams and the broader MBIM team. Specific Qualifications (if required): Skills Referential. Behavioural Skills: Ability to collaborate / Teamwork, Client focused, Critical thinking, Decision Making. Transversal Skills: Analytical Ability, Ability to develop and adapt a process, Ability to develop and leverage networks, Ability to develop others & improve their skills. Education Level: Master's degree or equivalent. Other/Specific Qualifications (if required): Knowledge of Git. Knowledge of DevOps (CI/CD).

Posted 3 weeks ago

Apply