
10660 ETL Jobs - Page 31

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

6.0 years

0 Lacs

India

Remote

Source: LinkedIn

Job Title: Azure Data Engineer (6 Years Experience) Location: Remote Employment Type: Full-time Experience Required: 6 years Job Summary: We are seeking an experienced Azure Data Engineer to join our data team. The ideal candidate will have strong expertise in designing and implementing scalable data solutions on Microsoft Azure, with a solid foundation in data integration, data warehousing, and ETL processes. Key Responsibilities: Design, build, and manage scalable data pipelines and data integration solutions in Azure Develop and optimize data lake and data warehouse solutions using Azure Data Lake, Azure Synapse, and Azure SQL Create ETL/ELT processes using Azure Data Factory Implement data modeling and data architecture best practices Collaborate with data analysts, data scientists, and business stakeholders to deliver reliable and efficient data solutions Monitor and troubleshoot data pipelines for performance and reliability Ensure data security, privacy, and compliance with organizational standards Automate data workflows and optimize performance using cloud-native tools Required Skills: 6 years of experience in data engineering roles Strong experience with Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, and Azure SQL Proficiency in SQL, T-SQL, and scripting languages (Python or PowerShell) Hands-on experience with data ingestion, transformation, and orchestration Good understanding of data modeling (dimensional, star/snowflake schema) Experience with version control (Git) and CI/CD in data projects Familiarity with Azure DevOps and monitoring tools Preferred Skills: Experience with Databricks or Spark on Azure Knowledge of data governance and metadata management tools Understanding of big data technologies and architecture Microsoft Certified: Azure Data Engineer Associate (preferred) Show more Show less
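
For illustration, a minimal PySpark sketch of the kind of pipeline step this role describes: reading raw data from Azure Data Lake Storage, applying light cleansing, and landing a curated copy. The storage paths, container names, and column names are invented placeholders, not details from the posting.

```python
# Minimal sketch: read raw orders from ADLS Gen2, apply a light transformation,
# and land the result as a curated dataset. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-curation").getOrCreate()

raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/sales/orders/")

curated = (
    raw.dropDuplicates(["order_id"])                       # basic de-duplication
       .withColumn("order_date", F.to_date("order_date"))  # normalise the date column
       .filter(F.col("amount") > 0)                        # drop obviously bad rows
)

(curated.write
        .mode("overwrite")
        .parquet("abfss://curated@examplelake.dfs.core.windows.net/sales/orders/"))
```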

Posted 2 days ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Source: LinkedIn

Company Description Renesas is one of the top global semiconductor companies. We strive to develop a safer, healthier, greener, and smarter world, and our goal is to make every endpoint intelligent by offering product solutions in the automotive, industrial, infrastructure, and IoT markets. Our robust product portfolio includes world-leading MCUs, SoCs, analog and power products, plus Winning Combination solutions that curate these complementary products. We are a key supplier to the world’s leading manufacturers of the electronics you rely on every day; you may not see our products, but they are all around you. Renesas employs roughly 21,000 people in more than 30 countries worldwide. As a global team, our employees actively embody the Renesas Culture, our guiding principles based on five key elements: Transparent, Agile, Global, Innovative, and Entrepreneurial. Renesas believes in, and is committed to, diversity and inclusion, with initiatives and a leadership team dedicated to its resources and values. At Renesas, we want to build a sustainable future where technology helps make our lives easier. Join us and build your future by being part of what’s next in electronics and the world. Job Description: (Database Analyst) Sr Sales Operations Analyst. Job Summary: We are looking for database analysts with 7 years’ experience to grow the sales analytics and reporting team. The Analyst will work in the Global Sales - Centralized Analytics and Reporting Ops team to provide ongoing support of our business intelligence tools and applications. The database specialist will focus on backend database development in Databricks, Oracle, and SQL Server. The candidate must be able to develop and modify notebooks, procedures, packages, and functions in the database environment, and should be able to create jobs in Databricks. Knowledge of Python is desired. Very strong skills in SQL, analytical queries, and procedural processing are required. Must have strong ETL skills and experience transferring data between multiple systems. Must be able to independently handle ad hoc user data requests and production issues in the data warehouse and reporting environment. Good knowledge of Excel preferred. Knowledge of Power BI and the DAX language preferred. The candidate will focus on designing effective reporting solutions that are scalable, repeatable, and meet the needs of business users; developing pipelines for data integration and aggregation; maintaining documentation; and accommodating ad-hoc user requests. This role will align with cross-functional groups such as IT, the Regional Distribution Team, Regional Sales Ops, Business Units, and Finance. Qualifications and Responsibilities: Proficient in relational databases (Databricks, SQL Server, Oracle) Proficient in SQL with the ability to modify procedures and notebooks in Databricks, Oracle, and SQL Server Proficient in advanced Excel features Ability to debug and modify existing Power BI dashboards. Perform ad-hoc reporting to support the business and help in data-driven decision making. Excellent problem-solving abilities and communication skills Must be willing to work independently and be an excellent team player. Must be willing to support systems after regular work hours.
Additional Information Renesas is an embedded semiconductor solution provider driven by its Purpose ‘ To Make Our Lives Easier .’ As the industry’s leading expert in embedded processing with unmatched quality and system-level know-how, we have evolved to provide scalable and comprehensive semiconductor solutions for automotive, industrial, infrastructure, and IoT industries based on the broadest product portfolio, including High Performance Computing, Embedded Processing, Analog & Connectivity, and Power. With a diverse team of over 21,000 professionals in more than 30 countries, we continue to expand our boundaries to offer enhanced user experiences through digitalization and usher into a new era of innovation. We design and develop sustainable, power-efficient solutions today that help people and communities thrive tomorrow, ‘ To Make Our Lives Easier .’ At Renesas, You Can Launch and advance your career in technical and business roles across four Product Groups and various corporate functions. You will have the opportunities to explore our hardware and software capabilities and try new things. Make a real impact by developing innovative products and solutions to meet our global customers' evolving needs and help make people’s lives easier, safe and secure. Maximize your performance and wellbeing in our flexible and inclusive work environment. Our people-first culture and global support system, including the remote work option and Employee Resource Groups, will help you excel from the first day. Are you ready to own your success and make your mark? Join Renesas. Let’s Shape the Future together. Renesas Electronics is an equal opportunity and affirmative action employer, committed to supporting diversity and fostering a work environment free of discrimination on the basis of sex, race, religion, national origin, gender, gender identity, gender expression, age, sexual orientation, military status, veteran status, or any other basis protected by law. For more information, please read our Diversity & Inclusion Statement. Show more Show less

Posted 2 days ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

Source: Indeed

Job Title: Dot Net Developer Location: Gujarat Experience Required: Minimum 5 years post-qualification Employment Type: Full-Time Department: IT / Software Development Key Responsibilities: Design, develop, and maintain scalable and secure Client-Server and distributed web applications using Microsoft .NET technologies. Collaborate with cross-functional teams (analysts, testers, developers) to implement project requirements. Ensure adherence to architectural and coding standards and apply best practices in .NET stack development. Integrate applications with third-party libraries and RESTful APIs for seamless data sharing. Develop and manage robust SQL queries, stored procedures, views, and functions using MS SQL Server. Implement SQL Server features such as replication techniques, Always ON, and database replication. Develop and manage ETL workflows, SSIS packages, and SSRS reports. (Preferred) Develop OLAP solutions for advanced data analytics. Participate in debugging and troubleshooting complex issues to deliver stable software solutions. Support IT application deployment and ensure smooth post-implementation functioning. Take ownership of assigned tasks and respond to changing project needs and timelines. Quickly adapt and learn new tools, frameworks, and technologies as required. Technical Skills Required: .NET Framework (4.0/3.5/2.0), C#, ASP.NET, MVC Bootstrap, jQuery, HTML/CSS Multi-layered architecture design Experience with RESTful APIs and third-party integrations MS SQL Server – Advanced SQL, Replication, SSIS, SSRS Exposure to ETL and OLAP (added advantage) Soft Skills: Excellent problem-solving and debugging abilities Strong team collaboration and communication skills Ability to work under pressure and meet deadlines Proactive learner with a willingness to adopt new technologies Job Types: Full-time, Permanent Pay: ₹60,000.00 - ₹90,000.00 per month Benefits: Flexible schedule Provident Fund Location Type: In-person Schedule: Fixed shift Experience: .NET: 5 years (Required) Location: Ahmedabad, Gujarat (Required) Work Location: In person Speak with the employer +91 7888499500

Posted 2 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect Location: Bangalore, Chennai, Delhi, Pune, Kolkata Primary Roles And Responsibilities Developing Modern Data Warehouse solutions using Databricks and the AWS/Azure stack Ability to provide forward-thinking solutions in the data engineering and analytics space Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them Work with the business to understand reporting-layer needs and develop data models to fulfill them Help junior team members resolve issues and technical challenges. Drive technical discussions with client architects and team members Orchestrate data pipelines via the Airflow scheduler Skills And Qualifications Bachelor's and/or master’s degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects. Deep understanding of star and snowflake dimensional modelling. Strong knowledge of data management principles Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture Should have hands-on experience in SQL, Python, and Spark (PySpark) Candidate must have experience in the AWS/Azure stack Desirable to have ETL with batch and streaming (Kinesis). Experience in building ETL / data warehouse transformation processes Experience with Apache Kafka for streaming / event-based data Experience with other open-source big data products such as Hadoop (incl. Hive, Pig, Impala) Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j) Experience working with structured and unstructured data including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting Databricks Certified Data Engineer Associate/Professional Certification (Desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects Should have experience working in Agile methodology Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql
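
For context, a minimal sketch of a Delta Lake dimension upsert of the sort a Databricks warehouse pipeline like this might run; the catalog, table, and key names are assumptions for illustration only.

```python
# Minimal sketch of an upsert into a customer dimension on Databricks Delta Lake.
# Table and column names are illustrative placeholders, not from the posting.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

updates = spark.read.table("staging.customer_updates")      # hypothetical staging table
dim = DeltaTable.forName(spark, "warehouse.dim_customer")   # hypothetical dimension table

(dim.alias("d")
    .merge(updates.alias("u"), "d.customer_id = u.customer_id")
    .whenMatchedUpdateAll()      # refresh existing customer rows
    .whenNotMatchedInsertAll()   # insert newly seen customers
    .execute())
```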

Posted 2 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

We are looking for an SSIS Developer with experience in maintaining ETL solutions using Microsoft SQL Server Integration Services (SSIS). The candidate should have extensive hands-on experience in data migration, data transformation, and integration workflows between multiple systems, with preferred exposure to Oracle Cloud Infrastructure (OCI). No. of Resources Required: 2 (1 resource with 5+ years' experience and 1 resource with 3+ years' experience). Please find below the JD for the data migration role. Job Description: We are looking for a highly skilled and experienced Senior SSIS Developer to design, develop, deploy, and maintain ETL solutions using Microsoft SQL Server Integration Services (SSIS). The candidate should have extensive hands-on experience in data migration, data transformation, and integration workflows between multiple systems, including preferred exposure to Oracle Cloud Infrastructure (OCI). Job Location: Corporate Office, Gurgaon Key Responsibilities: Design, develop, and maintain complex SSIS packages for ETL processes across different environments. Perform end-to-end data migration from legacy systems to modern platforms, ensuring data quality, integrity, and performance. Work closely with business analysts and data architects to understand data integration requirements. Optimize ETL workflows for performance and reliability, including incremental loads, batch processing, and error handling. Schedule and automate SSIS packages using SQL Server Agent or other tools. Conduct root cause analysis and provide solutions for data-related issues in production systems. Develop and maintain technical documentation, including data mapping, transformation logic, and process flow diagrams. Support integration of data between on-premises systems and Oracle Cloud (OCI) using SSIS and/or other middleware tools. Participate in code reviews, unit testing, and deployment support. Education: Bachelor’s degree in Computer Science, Information Technology, or related field (or equivalent practical experience). Required Skills: 3-7 years of hands-on experience in developing SSIS packages for complex ETL workflows. Strong SQL/T-SQL skills for querying, data manipulation, and performance tuning. Solid understanding of data migration principles, including historical data load, data validation, and reconciliation techniques. Experience working with various source/target systems like flat files, Excel, Oracle, DB2, SQL Server, etc. Good knowledge of job scheduling and automation techniques. Preferred Skills: Exposure or working experience with Oracle Cloud Infrastructure (OCI), especially in data transfer, integration, and schema migration. Familiarity with on-premises-to-cloud and cloud-to-cloud data integration patterns. Knowledge of Azure Data Factory, Informatica, or other ETL tools is a plus. Experience in .NET or C# for custom script components in SSIS is advantageous. Understanding of data warehousing and data lake concepts. If interested, kindly reply with your resume and the details below to amit.ranjan@binarysemantics.com Total Experience: Years of Experience in SSIS Development: Years of Experience in maintaining ETL Solutions using SSIS: Years of Experience in Data Migration / Data Transformation and integration workflows between multiple systems: Years of Experience in Oracle Cloud Infrastructure (OCI): Current Location: Home town: Reason for change: Minimum Joining Time: Regards, Amit Ranjan
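
As a small illustration of the validation side of such a migration, a hedged Python sketch that reconciles row counts between a legacy source and the migrated target over ODBC; the connection strings and table names are placeholders, and the actual ETL packages would be built in SSIS itself.

```python
# Minimal sketch: source-vs-target row-count reconciliation after a migration load.
# Connection strings and table names are hypothetical stand-ins.
import pyodbc

SRC = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=legacy-db;DATABASE=LegacyERP;Trusted_Connection=yes"
TGT = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=new-db;DATABASE=Warehouse;Trusted_Connection=yes"

def row_count(conn_str: str, table: str) -> int:
    with pyodbc.connect(conn_str) as conn:
        return conn.cursor().execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

src_rows = row_count(SRC, "dbo.Customers")
tgt_rows = row_count(TGT, "dbo.DimCustomer")

print(f"source={src_rows} target={tgt_rows} diff={src_rows - tgt_rows}")
```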

Posted 2 days ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect Location: Bangalore, Chennai, Delhi, Pune, Kolkata Primary Roles And Responsibilities Developing Modern Data Warehouse solutions using Databricks and the AWS/Azure stack Ability to provide forward-thinking solutions in the data engineering and analytics space Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix them Work with the business to understand reporting-layer needs and develop data models to fulfill them Help junior team members resolve issues and technical challenges. Drive technical discussions with client architects and team members Orchestrate data pipelines via the Airflow scheduler Skills And Qualifications Bachelor's and/or master’s degree in computer science or equivalent experience. Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects. Deep understanding of star and snowflake dimensional modelling. Strong knowledge of data management principles Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture Should have hands-on experience in SQL, Python, and Spark (PySpark) Candidate must have experience in the AWS/Azure stack Desirable to have ETL with batch and streaming (Kinesis). Experience in building ETL / data warehouse transformation processes Experience with Apache Kafka for streaming / event-based data Experience with other open-source big data products such as Hadoop (incl. Hive, Pig, Impala) Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j) Experience working with structured and unstructured data including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting Databricks Certified Data Engineer Associate/Professional Certification (Desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects Should have experience working in Agile methodology Strong verbal and written communication skills. Strong analytical and problem-solving skills with high attention to detail. Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks Skills: neo4j, pig, mongodb, pl/sql, architect, terraform, hadoop, pyspark, impala, apache kafka, adfs, etl, data warehouse, spark, azure, databricks, rdbms, cassandra, aws, unix shell scripting, circleci, python, azure synapse, hive, git, kinesis, sql
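
Since this posting also calls for orchestrating pipelines via Airflow, here is a minimal Airflow DAG sketch showing how extract, transform, and load steps might be sequenced; the DAG id, schedule, and task callables are illustrative assumptions.

```python
# Minimal Airflow DAG sketch: three placeholder tasks run in sequence once a day.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source extracts to the landing zone")

def transform():
    print("run the Spark transformation job")

def load():
    print("publish curated tables for reporting")

with DAG(
    dag_id="example_dw_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```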

Posted 2 days ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Role Overview We are seeking a highly skilled and motivated Data Engineer (Gurgaon) with 2+ years of experience. If you're passionate about coding, problem-solving, and innovation, we'd love to hear from you! About Us CodeVyasa is a mid-sized product engineering company that works with top-tier product/solutions companies such as McKinsey, Walmart, RazorPay, Swiggy, and others. We are about 550+ people strong and cater to product and data engineering use cases around Agentic AI, RPA, full-stack development, and various other GenAI areas. Key Responsibilities: Design, build, and manage scalable data pipelines using Azure Data Factory and PySpark. Lead data warehousing and lakehouse architecture initiatives to support advanced analytics and BI use cases. Collaborate with stakeholders to understand data requirements and translate them into technical solutions. Build and maintain insightful dashboards and reports using Power BI. Mentor junior team members and provide technical leadership across data projects. Ensure best practices in data governance, quality, and security. Must-Have Skills: 2-7 years of experience in data engineering and analytics. Strong hands-on experience with Azure Data Factory, PySpark, and Power BI. Deep understanding of data warehousing concepts and data lakehouse architecture. Proficient in data modeling, ETL/ELT processes, and performance tuning. Strong problem-solving and communication skills. Why Join CodeVyasa? Work on innovative, high-impact projects with a team of top-tier professionals. Continuous learning opportunities and professional growth. Flexible work environment with a supportive company culture. Competitive salary and comprehensive benefits package. Free healthcare coverage. Here's a glimpse of what life at CodeVyasa looks like: Life at CodeVyasa.
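
For illustration, a minimal PySpark sketch of a watermark-based incremental load, one common pattern behind the scalable pipelines this role describes; the control and source table names and the watermark column are assumptions.

```python
# Minimal sketch: load only rows modified since the last successful run.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Last successfully loaded timestamp, e.g. persisted in a small control table.
last_watermark = (
    spark.read.table("control.load_watermarks")
         .filter(F.col("table_name") == "sales_orders")
         .agg(F.max("loaded_until"))
         .collect()[0][0]
)

new_rows = (
    spark.read.table("raw.sales_orders")
         .filter(F.col("modified_at") > F.lit(last_watermark))
)

new_rows.write.mode("append").saveAsTable("curated.sales_orders")
```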

Posted 2 days ago

Apply

2.0 - 5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Be a part of India’s largest and most admired news network! Network18 is India's most diversified Media Company in the fast growing Media market. The Company has a strong Heritage and we possess a strong presence in Magazines, Television and Internet domains. Our brands like CNBC, Forbes and Moneycontrol are market leaders in their respective segments. The Company has over 7,000 employees across all major cities in India and has been consistently managed to stay ahead of the growth curve of the industry. Network 18 brings together employees from varied backgrounds under one roof united by the hunger to create immersive content and ideas. We take pride in our people, who we believe are the key to realizing the organization’s potential. We continually strive to enable our employees to realize their own goals, by providing opportunities to learn, share and grow. Role Overview: We are seeking a passionate and skilled Data Scientist with over a year of experience to join our dynamic team. You will be instrumental in developing and deploying machine learning models, building robust data pipelines, and translating complex data into actionable insights. This role offers the opportunity to work on cutting-edge projects involving NLP, Generative AI, data automation, and cloud technologies to drive business value. Key Responsibilities: Design, develop, and deploy machine learning models, with a strong focus on NLP (including advanced techniques and Generative AI) and other AI applications. Build, maintain, and optimize ETL pipelines for automated data ingestion, transformation, and standardization from various sources Work extensively with SQL for data extraction, manipulation, and analysis in environments like BigQuery. Develop solutions using Python and relevant data science/ML libraries (Pandas, NumPy, Hugging Face Transformers, etc.). Utilize Google Cloud Platform (GCP) services for data storage, processing, and model deployment. Create and maintain interactive dashboards and reporting tools (e.g., Power BI) to present insights to stakeholders. Apply basic Docker concepts for containerization and deployment of applications. Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions. Stay abreast of the latest advancements in AI/ML and NLP best practices. Required Qualifications & Skills: 2 to 5 years of hands-on experience as a Data Scientist or in a similar role. Solid understanding of machine learning fundamentals, algorithms, and best practices. Proficiency in Python and relevant data science libraries. Good SQL skills for complex querying and data manipulation. Demonstrable experience with Natural Language Processing (NLP) techniques, including advanced models (e.g., transformers) and familiarity with Generative AI concepts and applications. Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Preferred Qualifications & Skills: Familiarity and hands-on experience with Google Cloud Platform (GCP) services, especially BigQuery, Cloud Functions, and Vertex AI. Basic understanding of Docker and containerization for deploying applications. Experience with dashboarding tools like Power BI and building web applications with Streamlit. Experience with web scraping tools and techniques (e.g., BeautifulSoup, Scrapy, Selenium). Knowledge of data warehousing concepts and schema design. Experience in designing and building ETL pipelines. 
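
As a small example of the NLP work this role mentions, a hedged sketch using the Hugging Face pipeline API with its default sentiment model; the sample headlines are invented.

```python
# Minimal sketch: classify a couple of sample headlines with the default
# sentiment-analysis pipeline. The texts are invented examples.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

headlines = [
    "Markets rally as quarterly earnings beat expectations",
    "Regulator opens probe into data-sharing practices",
]

for text, result in zip(headlines, classifier(headlines)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```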
Disclaimer: Please note Network18 and related group companies do not use the services of vendors or agents for recruitment. Please beware of such agents or vendors providing assistance. Network18 will not be responsible for any losses incurred. “We correspond only from our official email address” Show more Show less

Posted 2 days ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Greeting from Infosys BPM Ltd., Exclusive Women's Walkin drive We are hiring for Content and Technical writer, ETL DB Testing, ETL Testing Automation, .NET, Python Developer skills. Please walk-in for interview on 20th June 2025 at Chennai location Note: Please carry copy of this email to the venue and make sure you register your application before attending the walk-in. Please use below link to apply and register your application. Please mention Candidate ID on top of the Resume *** https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-215140 Interview details Interview Date: 20th June 2025 Interview Time: 10 AM till 1 PM Interview Venue:TP 1/1, Central Avenue Techno Park, SEZ, Mahindra World City, Paranur, TamilNadu Please find below Job Description for your reference: Work from Office*** Rotational Shifts Min 2 years of experience on project is mandate*** Job Description: Content and Technical writer Develop high-quality technical documents, including user manuals, guides, and release notes. Collaborate with cross-functional teams to gather requirements and create accurate documentation. Conduct functional testing and manual testing to ensure compliance with FDA regulations. Ensure adherence to ISO standards and maintain a clean, organized document management system. Strong understanding of Infra domain Technical writer that can convert complex technical concepts into easy to consume documents for the targeted audience. In addition, will also be a mentor to the team with technical writing. Job Description: ETL DB Testing Strong experience in ETL testing, data warehousing, and business intelligence. Strong proficiency in SQL. Experience with ETL tools (e.g., Informatica, Talend, AWS Glue, Azure Data Factory). Solid understanding of Data Warehousing concepts, Database Systems and Quality Assurance. Experience with test planning, test case development, and test execution. Experience writing complex SQL Queries and using SQL tools is a must, exposure to various data analytical functions. Familiarity with defect tracking tools (e.g., Jira). Experience with cloud platforms like AWS, Azure, or GCP is a plus. Experience with Python or other scripting languages for test automation is a plus. Experience with data quality tools is a plus. Experience in testing of large datasets. Experience in agile development is must Understanding of Oracle Database and UNIX/VMC systems is a must Job Description: ETL Testing Automation Strong experience in ETL testing and automation. Strong proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, PostgreSQL, SQL Server). Experience with ETL tools and technologies (e.g., Informatica, Talend, DataStage, Apache Spark). Hands-on experience in developing and maintaining test automation frameworks. Proficiency in at least one programming language (e.g., Python, Java). Experience with test automation tools (e.g., Selenium, PyTest, JUnit). Strong understanding of data warehousing concepts and methodologies. Experience with CI/CD pipelines and version control systems (e.g., Git). Experience with cloud-based data warehouses like Snowflake, Redshift, BigQuery is a plus. Experience with data quality tools is a plus. 
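
For the ETL testing profiles above, a minimal pytest sketch of typical source-to-target checks (row counts and orphan keys); it uses an in-memory SQLite database as a stand-in for the real source and warehouse systems.

```python
# Minimal sketch of ETL test automation checks; SQLite stands in for the real databases.
import sqlite3
import pytest

@pytest.fixture
def conn():
    db = sqlite3.connect(":memory:")
    db.executescript(
        """
        CREATE TABLE src_orders (id INTEGER PRIMARY KEY, amount REAL);
        CREATE TABLE tgt_orders (id INTEGER PRIMARY KEY, amount REAL);
        INSERT INTO src_orders VALUES (1, 10.0), (2, 25.5);
        INSERT INTO tgt_orders VALUES (1, 10.0), (2, 25.5);
        """
    )
    yield db
    db.close()

def test_row_counts_match(conn):
    src = conn.execute("SELECT COUNT(*) FROM src_orders").fetchone()[0]
    tgt = conn.execute("SELECT COUNT(*) FROM tgt_orders").fetchone()[0]
    assert src == tgt

def test_no_orphan_ids(conn):
    orphans = conn.execute(
        "SELECT COUNT(*) FROM tgt_orders t "
        "LEFT JOIN src_orders s ON s.id = t.id WHERE s.id IS NULL"
    ).fetchone()[0]
    assert orphans == 0
```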
Job Description: .Net Should have worked on .Net development/implementation/Support project Must have experience in .NET, ASP.NET MVC, C#, WPF, WCF, SQL Server, Azure Must have experience in Web services, Web API, REST services, HTML, CSS3 Understand Architecture Requirements and ensure effective Design, Development, Validation and Support activities. REGISTRATION PROCESS: The Candidate ID & SHL Test(AMCAT ID) is mandatory to attend the interview. Please follow the below instructions to successfully complete the registration. (Talents without registration & assessment will not be allowed for the Interview). Candidate ID Registration process: STEP 1: Visit: https://career.infosys.com/joblist STEP 2: Click on "Register" and provide the required details and submit. STEP 3: Once submitted, Your Candidate ID(100XXXXXXXX) will be generated. STEP 4: The candidate ID will be shared to the registered Email ID. SHL Test(AMCAT ID) Registration process: This assessment is proctored, and talent gets evaluated on Basic analytics, English Comprehension and writex (email writing). STEP 1: Visit: https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fautologin-talentcentral.shl.com%2F%3Flink%3Dhttps%3A%2F%2Famcatglobal.aspiringminds.com%2F%3Fdata%3DJTdCJTIybG9naW4lMjIlM0ElN0IlMjJsYW5ndWFnZSUyMiUzQSUyMmVuLVVTJTIyJTJDJTIyaXNBdXRvbG9naW4lMjIlM0ExJTJDJTIycGFydG5lcklkJTIyJTNBJTIyNDE4MjQlMjIlMkMlMjJhdXRoa2V5JTIyJTNBJTIyWm1abFpUazFPV1JsTnpJeU1HVTFObU5qWWpRNU5HWTFOVEU1Wm1JeE16TSUzRCUyMiUyQyUyMnVzZXJuYW1lJTIyJTNBJTIydXNlcm5hbWVfc3E5QmgxSWI5NEVmQkkzN2UlMjIlMkMlMjJwYXNzd29yZCUyMiUzQSUyMnBhc3N3b3JkJTIyJTJDJTIycmV0dXJuVXJsJTIyJTNBJTIyJTIyJTdEJTJDJTIycmVnaW9uJTIyJTNBJTIyVVMlMjIlN0Q%3D%26apn%3Dcom.shl.talentcentral%26ibi%3Dcom.shl.talentcentral%26isi%3D1551117793%26efr%3D1&data=05%7C02%7Comar.muqtar%40infosys.com%7Ca7ffe71a4fe4404f3dac08dca01c0bb3%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638561289526257677%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=s28G3ArC9nR5S7J4j%2FV1ZujEnmYCbysbYke41r5svPw%3D&reserved=0 STEP 2: Click on "Start new test" and follow the instructions to complete the assessment. STEP 3: Once completed, please make a note of the AMCAT ID( Access you Amcat id by clicking 3 dots on top right corner of screen). NOTE: During registration, you'll be asked to provide the following information: Personal Details: Name, Email Address, Mobile Number, PAN number. Availability: Acknowledgement of work schedule preferences (Shifts, Work from Office, Rotational Weekends, 24/7 availability, Transport Boundary) and reason for career change. Employment Details: Current notice period and total annual compensation (CTC) in the format 390000 - 4 LPA (example). Candidate Information: 10-digit candidate ID starting with 100XXXXXXX, Gender, Source (e.g., Vendor name, Naukri/LinkedIn/Found it, or Direct), and Location Interview Mode: Walk-in Attempt all questions in the SHL Assessment app. The assessment is proctored, so choose a quiet environment. Use a headset or Bluetooth headphones for clear communication. A passing score is required for further interview rounds. 5 or above toggles, multi face detected, face not detected, or any malpractice will be considered rejected Once you've finished, submit the assessment and make a note of the AMCAT ID (15 Digit) used for the assessment. Documents to Carry: Please have a note of Candidate ID & AMCAT ID along with registered Email ID. 
Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions. Please carry 2 set of updated Resume/CV (Hard Copy). Please carry original ID proof for security clearance. Please carry individual headphone/Bluetooth for the interview. Pointers to note: Please do not carry laptops/cameras to the venue as these will not be allowed due to security restrictions. Original Government ID card is must for Security Clearance. Regards, Infosys BPM Recruitment team. Show more Show less

Posted 2 days ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

Remote

Source: LinkedIn

Role : Database Engineer Location : Remote Notice Period : 30 Days Skills And Experience Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. Eagerness to develop import workflows and scripts to automate data import processes. Knowledge of data security best practices, including access controls, encryption, and compliance standards. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Strong willingness to learn and expand knowledge in data engineering. Familiarity with Agile development methodologies is a plus. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Ability to work collaboratively in a team environment. Good and effective communication skills. Comfortable with autonomy and ability to work independently. Show more Show less
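
For illustration, a minimal sketch of the Python-based import automation this role hints at: loading a CSV with pandas and pushing it to PostgreSQL through SQLAlchemy. The connection URL, file path, and table name are placeholders.

```python
# Minimal sketch: load a CSV, do light cleanup, and write it to a staging table.
# Connection URL and paths are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/analytics")

df = pd.read_csv("exports/customers.csv")
df["email"] = df["email"].str.lower()   # light cleanup before loading

df.to_sql("customers_staging", engine, if_exists="replace", index=False)
```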

Posted 2 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Role: AWS Data Engineer JOB LOCATION : Chennai, Indore , Pune EXPERIENCE REQUIREMENT : 5+ Required Technical Skill: Strong Knowledge of Aws Glue/AWS REDSHIFT/SQL/ETL. Good knowledge and experience in Pyspark for forming complex Transformation logic. AWS Data Engineer, SQL,ETL, DWH , Secondary : AWS Glue , Airflow Must-Have Good Knowledge of SQL , ETL A minimum of 3 + years' experience and understanding of Python core concepts, and implementing data pipeline frameworks using PySpark and AWS. Work well independently as well as within a team Good knowledge in working with various AWS services including S3, Glue, DMS, Redshift. Proactive, organized, excellent analytical and problem-solving skills Flexible and willing to learn, can-do attitude is key Strong verbal and written communication skills Good-to-Have Good knowledge of SQL ,ETL ,understanding of Python core concepts, and implementing data pipeline frameworks using PySpark and AWS. Good knowledge in working with various AWS services including S3, Glue, DMS, Redshift Responsibility: AWS Data Engineer Pyspark / Python / SQL / ETL A minimum of 3 + years' experience and understanding of Python core concepts, and implementing data pipeline frameworks using PySpark and AWS Good knowledge of SQL, ETL and also working with various AWS services including S3, Glue, DMS, Redshift Show more Show less
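
For context, a minimal AWS Glue PySpark job sketch along the lines this role describes: read from the Glue Data Catalog, aggregate, and write Parquet to S3. The database, table, and bucket names are invented.

```python
# Minimal AWS Glue job sketch: catalog read -> aggregation -> S3 write.
# Database, table, and bucket names are hypothetical.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="orders"
).toDF()

daily = orders.groupBy("order_date").agg(F.sum("amount").alias("total_amount"))

daily.write.mode("overwrite").parquet("s3://example-curated-bucket/daily_orders/")
job.commit()
```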

Posted 2 days ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Source: LinkedIn

Company Name: Nectarbits Pvt Ltd Location: Gota, Ahmedabad Experience: 5+ years Job Description Responsible for the successful delivery and closure of multiple projects. Facilitates team activities, including daily stand-up meetings, grooming, sprint planning, demonstrations, release planning, and team retrospectives. Ensure the team is aware of tasks and delivery dates. Develop project scopes and objectives, involving all relevant stakeholders and ensuring technical feasibility. Identify scope creep to keep projects on track, and judge commercial viability and actionable steps. Lead sprint planning sessions and periodic conference calls with clients and senior team members to agree on the prioritization of projects and tasks. Be a central point of contact, responsible for the projects handled, and provide transparency and collaboration across different teams. Represent the team's needs and requirements to the client to ensure timelines and quality of delivery are practically achievable. Build a trusting and safe environment where problems can be raised and resolved. Understand clients' business and processes to provide effective solutions as a technology consultant. Report and escalate to management as needed. Quick learner and implementer of learning paths. Must Have: Hands-on development experience in QA Automation and managing large-scale projects. Experience in managing new development projects with at least an 8-to-10-person team over a duration of 6+ months (excluding ongoing support and maintenance projects/tasks), developing the project and release plan, and adhering to the standard processes of the organization. Excellent verbal and written communication skills with both technical and non-technical customers. Strong understanding of architecture, design, and implementation of technical solutions. Extremely fluent in REST/SOAP APIs with JSON/XML. Experience in ETL is a plus. A good understanding of N-tier and microservice architecture. Well-versed in Agile development methodology and all its ceremonies. Excellent problem-solving/troubleshooting skills, particularly in anticipating and solving problems, issues, risks, or concerns before they become critical. Prepare a clear and effective communications plan and ensure proactive communication of all relevant information to the customer and all stakeholders. Experience in creating wireframes and/or presentations to effectively convey technology solutions. Good to Have: Assess and work with the sales team to create and review proposals and contracts to determine a proper project plan.

Posted 2 days ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Bangalore & Gurugram YOE - 7+ years We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You’ll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog. Key Responsibilities: Design and implement ETL/ELT pipelines using Databricks and PySpark. Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets. Develop high-performance SQL queries and optimize Spark jobs. Collaborate with data scientists, analysts, and business stakeholders to understand data needs. Ensure data quality and compliance across all stages of the data lifecycle. Implement best practices for data security and lineage within the Databricks ecosystem. Participate in CI/CD, version control, and testing practices for data pipelines Required Skills: Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits). Strong hands-on skills with PySpark and Spark SQL. Solid experience writing and optimizing complex SQL queries. Familiarity with Delta Lake, data lakehouse architecture, and data partitioning. Experience with cloud platforms like Azure or AWS. Understanding of data governance, RBAC, and data security standards. Preferred Qualifications: Databricks Certified Data Engineer Associate or Professional. Experience with tools like Airflow, Git, Azure Data Factory, or dbt. Exposure to streaming data and real-time processing. Knowledge of DevOps practices for data engineering. Interested candidate can submit details at https://forms.office.com/r/g2h52X7Bt9 Show more Show less
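
As a small illustration of Unity Catalog-style governance from a Databricks notebook, a hedged sketch that creates a governed table in the three-level namespace and grants group-level access; the catalog, schema, and group names are assumptions.

```python
# Minimal sketch: create a governed table in Unity Catalog's catalog.schema.table
# namespace and grant access to illustrative groups.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.sales")

spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.sales.orders (
        order_id BIGINT,
        order_date DATE,
        amount DECIMAL(12, 2)
    )
""")

# Restrict access: analysts can read, the engineering group can also write.
spark.sql("GRANT SELECT ON TABLE analytics.sales.orders TO `data-analysts`")
spark.sql("GRANT MODIFY ON TABLE analytics.sales.orders TO `data-engineers`")
```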

Posted 2 days ago

Apply

5.0 years

0 Lacs

Karnataka, India

On-site

Source: LinkedIn

Job Title: Senior Product Manager Location: Onsite - Bangalore, India Shift Timing: UK Shift (01:30 PM IST - 10:30 PM IST) About the Role: We are looking for a Senior Product Manager to take ownership of inbound product management for key enterprise SaaS offerings. This role involves driving product strategy, defining and managing requirements, coordinating with engineering and cross-functional teams, and ensuring timely, customer-focused releases. If you are passionate about solving real-world problems through innovative product development in the cloud and data integration space, we'd love to connect. Roles & Responsibilities: Lead the execution of the product roadmap, including go-to-market planning, product enhancements, and launch initiatives Translate market needs and customer feedback into detailed product requirements and specifications Conduct competitive analysis, assess industry trends, and define effective product positioning and pricing Collaborate with engineering teams to deliver high-quality solutions within defined timelines Create and maintain Product Requirement Documents (PRDs), functional specs, use cases, and internal presentation materials Evaluate build vs. buy options and engage in strategic partnerships to deliver comprehensive solutions Work closely with marketing to build sales enablement tools—product datasheets, pitch decks, whitepapers, and more Act as a domain expert by providing product training to internal teams such as sales, support, and services Join client interactions (calls and demos) to gather insights, validate solutions, and support adoption Ensure alignment between product vision, business goals, and technical feasibility throughout development cycles Skills & Qualifications: Minimum 5+ years of experience in product management for SaaS or enterprise software products Proven track record in delivering inbound-focused product strategy and leading full product lifecycles Experience with data integration, ETL, or cloud-based data platforms is highly desirable Strong working knowledge of cloud platforms like AWS, GCP, Azure, or Snowflake Familiarity with multi-tenant SaaS architectures and tools like Salesforce, NetSuite, etc Demonstrated ability to work in Agile environments with distributed development teams Exceptional analytical, communication, and stakeholder management skills Ability to prioritize effectively in fast-paced, evolving environments Bachelor's degree in Computer Science, Business Administration, or a related field. MBA preferred Experience in working with international teams or global product rollouts is a plus Show more Show less

Posted 2 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: Test Engineer Location: Hyderabad (Onsite) Experience Required: 5 Years Job Description: We are looking for a detail-oriented and skilled Test Engineer with 5 years of experience in testing SAS applications and data pipelines . The ideal candidate should have a solid background in SAS programming , data validation , and test automation within enterprise data environments. Key Responsibilities: Conduct end-to-end testing of SAS applications and data pipelines to ensure accuracy and performance. Write and execute test cases/scripts using Base SAS, Macros, and SQL . Perform SQL query validation and data reconciliation using industry-standard practices. Validate ETL pipelines developed using tools like Talend, IBM Data Replicator , and Qlik Replicate . Conduct data integration testing with Snowflake and use explicit pass-through SQL to ensure integrity across platforms. Utilize test automation frameworks using Selenium, Python, or Shell scripting to increase test coverage and reduce manual efforts. Identify, document, and track bugs through resolution, ensuring high-quality deliverables. Required Skills: Strong experience in SAS programming (Base SAS, Macro) . Expertise in writing and validating SQL queries . Working knowledge of data testing frameworks and reconciliation tools . Experience with Snowflake and ETL validation tools like Talend, IBM Data Replicator, Qlik Replicate. Proficiency in test automation using Selenium , Python , or Shell scripts . Solid understanding of data pipelines and data integration testing practices. Show more Show less

Posted 2 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Greetings from Tata Consultancy Services. TCS is hiring for Python Developer. Experience: 5-10 years Location: Pune/Hyderabad/Bangalore/Chennai/Kochi/Bhubaneswar Please find the JD below Required Technical Skill - ETL development experience. Must Have - 4+ years of experience in Python

Posted 2 days ago

Apply

8.0 years

0 Lacs

India

Remote

Source: LinkedIn

Job title : Data Engineer Experience: 5–8 Years Location: Remote Shift: IST (Indian Standard Time) Contract Type: Short-Term Contract Job Overview We are seeking an experienced Data Engineer with deep expertise in Microsoft Fabric to join our team on a short-term contract basis. You will play a pivotal role in designing and building scalable data solutions and enabling business insights in a modern cloud-first environment. The ideal candidate will have a passion for data architecture, strong hands-on technical skills, and the ability to translate business needs into robust technical solutions. Key Responsibilities Design and implement end-to-end data pipelines using Microsoft Fabric components (Data Factory, Dataflows Gen2). Build and maintain data models , semantic layers , and data marts for reporting and analytics. Develop and optimize SQL-based ETL processes integrating structured and unstructured data sources. Collaborate with BI teams to create effective Power BI datasets , dashboards, and reports. Ensure robust data integration across various platforms (on-premises and cloud). Implement mechanisms for data quality , validation, and error handling. Translate business requirements into scalable and maintainable technical solutions. Optimize data pipelines for performance and cost-efficiency . Provide technical mentorship to junior data engineers as needed. Required Skills Hands-on experience with Microsoft Fabric : Dataflows Gen2, Pipelines, OneLake. Strong proficiency in Power BI , including semantic modeling and dashboard/report creation. Deep understanding of data modeling techniques: star schema, snowflake schema, normalization, denormalization. Expertise in SQL , stored procedures, and query performance tuning. Experience integrating data from diverse sources: APIs, flat files, databases, and streaming. Knowledge of data governance , lineage, and data catalog tools within the Microsoft ecosystem. Strong problem-solving skills and ability to manage large-scale data workflows. Show more Show less

Posted 2 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Summary: We are looking for a skilled ETL Tester with hands-on experience in validating data pipelines and data transformations in an AWS-based ecosystem . The ideal candidate should have a strong background in ETL testing, a solid understanding of data warehousing concepts, and proficiency with tools and services in AWS like S3, Redshift, Glue, Athena, and Lambda. Key Responsibilities: Design and execute ETL test cases for data ingestion, transformation, and loading processes. Perform data validation and reconciliation across source systems, staging, and target layers (e.g., S3, Redshift, RDS). Understand data mappings and business rules; write SQL queries to validate transformation logic. Conduct end-to-end testing including functional, regression, and performance testing of ETL jobs. Work closely with developers, data engineers, and business analysts to identify and troubleshoot defects . Validate data pipelines orchestrated through AWS Glue, Step Functions, and Lambda functions . Utilize Athena and Redshift Spectrum for testing data stored in S3. Collaborate using tools like JIRA, Confluence, Git, and CI/CD pipelines . Prepare detailed test documentation including test plans, test cases, and test summary reports. Required Skills: 3–4 years of experience in ETL/Data Warehouse testing . Strong SQL skills for data validation across large datasets. Working knowledge of AWS services such as S3, Redshift, Glue, Athena, Lambda, CloudWatch. Experience testing batch and streaming data pipelines . Familiarity with Python or PySpark is a plus for data transformation or test automation. Experience in using ETL tools (e.g., Informatica, Talend, or AWS Glue ETL scripts). Knowledge of Agile/Scrum methodology . Understanding of data quality frameworks and test automation practices . Good to Have: Exposure to BI tools like QuickSight, Tableau, or Power BI. Basic understanding of data lake and data lakehouse architectures . Experience in working with JSON, Parquet , and other semi-structured data formats. Show more Show less
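
For illustration, a minimal sketch of one way to run the S3/Athena validation this role describes, using the awswrangler library to compare row counts between staging and curated tables; the database and table names are placeholders.

```python
# Minimal sketch: reconcile row counts between two Athena tables over S3 data.
# Database and table names are hypothetical.
import awswrangler as wr

STAGING_SQL = "SELECT COUNT(*) AS cnt FROM staging_orders"
CURATED_SQL = "SELECT COUNT(*) AS cnt FROM curated_orders"

staging_cnt = wr.athena.read_sql_query(STAGING_SQL, database="etl_validation").iloc[0]["cnt"]
curated_cnt = wr.athena.read_sql_query(CURATED_SQL, database="etl_validation").iloc[0]["cnt"]

assert staging_cnt == curated_cnt, f"Row count mismatch: {staging_cnt} vs {curated_cnt}"
print(f"Reconciliation passed: {curated_cnt} rows in both layers")
```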

Posted 2 days ago

Apply

3.5 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Hiring across all levels. Location: Pune (preferred) / Gurgaon Mandatory Skillset: Validated systems testing expertise; validated systems testing and end-to-end test management expertise plus strong client engagement; validated systems testing and end-to-end test management expertise. Responsibilities: Contribute to the project’s overall Computer System Validation deliverables. Create and execute test scenarios; select the best testing methodologies, techniques, and evaluation criteria for testing. Draft, review, and approve validation deliverables such as user requirements, technical design specifications, IQ/OQ/PQ scripts and reports, error logs, configuration documents, and traceability matrices, and document 21 CFR Part 11 and EU Annex 11 compliance. Build automation scripts and help the leads in designing/configuring the test automation frameworks. Understand the various testing activities: unit, system, component, user acceptance, performance, integration, and regression. Participate in user story mapping, sprint planning, estimation, and feature walk-throughs. Experience in leading junior test analysts, assigning and tracking tasks provided to team members. Manage the end-to-end testing/validation lifecycle on applications like Solution Manager, JIRA, and HP ALM (desired). Self-motivated, team-oriented individual with strong problem-solving abilities. Evaluate risks, assess closure requirements, and process change controls for computerized systems. Define key test processes, best practices, KPIs, and collateral. Well versed in ETL or automation testing. Qualifications: Bachelor's/master’s degree in Engineering, Science, Medical, or a related field. Hands-on experience in Computer System Validation of applications and infrastructure qualification. A minimum of 3.5-11 years of experience in computer systems validation and hands-on experience within GxP (GCP/GMP regulated environments (FDA, EU, MHRA)). Experience in managing and leading testing-related activities. Experience in creating test scenarios and testing protocols (IQ/OQ/PQ/UAT) within the various SDLC or Agile phases as per the standard GxP protocol. In-depth understanding of defect management processes. Strong SQL skills for data validation and development of expected results. Hands-on with test management tools: JIRA and HP ALM. Experience in defining risk-based strategies for validation of computerized systems and authoring/reviewing end-to-end CSV documentation in accordance with various test plans. Understanding of CSV and project-related SOPs and WIs. Well versed in Good Documentation Practices.

Posted 2 days ago

Apply

0.0 - 6.0 years

0 Lacs

Hyderabad, Telangana

On-site

Source: Indeed

Experience- 6+ years Work Mode- Hybrid Job Summary: We are seeking a skilled Informatica ETL Developer with 5+ years of experience in ETL and Business Intelligence projects. The ideal candidate will have a strong background in Informatica PowerCenter , a solid understanding of data warehousing concepts , and hands-on experience in SQL, performance tuning , and production support . This role involves designing and maintaining robust ETL pipelines to support digital transformation initiatives for clients in manufacturing, automotive, transportation, and engineering domains. Key Responsibilities: Design, develop, and maintain ETL workflows using Informatica PowerCenter . Troubleshoot and optimize ETL jobs for performance and reliability. Analyze complex data sets and write advanced SQL queries for data validation and transformation. Collaborate with data architects and business analysts to implement data warehousing solutions . Apply SDLC methodologies throughout the ETL development lifecycle. Support production environments by identifying and resolving data and performance issues. Work with Unix shell scripting for job automation and scheduling. Contribute to the design of technical architectures that support digital transformation. Required Skills: 3–5 years of hands-on experience with Informatica PowerCenter . Proficiency in SQL and familiarity with NoSQL platforms . Experience in ETL performance tuning and troubleshooting . Solid understanding of Unix/Linux environments and scripting. Excellent verbal and written communication skills. Preferred Qualifications: AWS Certification or experience with cloud-based data integration is a plus. Exposure to data modeling and data governance practices. Job Type: Full-time Pay: From ₹1,000,000.00 per year Location Type: In-person Schedule: Monday to Friday Ability to commute/relocate: Hyderabad, Telangana: Reliably commute or planning to relocate before starting work (Required) Application Question(s): What is your current CTC? What is your expected CTC? What is your current location? What is your notice period/ LWD? Are you comfortable attending L2 F2F interview in Hyderabad? Experience: Informatica powercenter: 5 years (Required) total work: 6 years (Required) Work Location: In person

Posted 2 days ago

Apply

15.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

Job Title : Technical Architect - Data Governance & MDM Experience: 15+ years Location: Mumbai/Pune/Bangalore Role Overview: The Technical Architect specializing in Data Governance and Master Data Management (MDM) designs, implements, and optimizes enterprise data solutions. The jobholder has expertise in tools like Collibra, Informatica, InfoSphere, Reltio, and other MDM platforms, ensuring data quality, compliance, and governance across the organization. Responsibilities: Architect and optimize strategies for data quality, metadata management, and data stewardship. Design and implement data governance frameworks and MDM solutions using tools like Collibra, Informatica, InfoSphere, and Reltio. Develop strategies for data quality, metadata management, and data stewardship. Collaborate with cross-functional teams to integrate MDM solutions with existing systems. Establish best practices for data governance, security, and compliance. Monitor and troubleshoot MDM environments for performance and reliability. Provide technical leadership and guidance to data teams. Stay updated on advancements in data governance and MDM technologies. Key Technical Skills. 10+ years of experience working on DG/MDM projects Strong on Data Governance concepts Hands-on different DG tools/services Hands-on Reference Data, Taxonomy Strong understanding of Data Governance, Data Quality, Data Profiling, Data Standards, Regulations, Security Match and Merge strategy Design and implement the MDM Architecture and Data Models Usage of Spark capabilities Statistics to deduce meanings from vast enterprise level data Different data visualization means of analyzing huge data sets Good to have knowledge of Python/R/Scala languages Experience on DG on-premise and on-cloud Understanding of MDM, Customer, Product, Vendor Domains and related artifacts Experience of working on proposals, customer workshops, assessments etc is preferred Must have good communication and presentation skills Technology Stack - Collibra, IBM MDM, Reltio, Infosphere Eligibility Criteria 15+ years of total experience. Bachelor’s degree in Computer Science, Data Management, or a related field. Proven experience as a Technical Architect in Data Governance and MDM. Certifications in relevant MDM tools (e.g., Collibra Data Governance, Informatica / InfoSphere / Reltio MDM, ). Experience with cloud platforms like AWS, Azure, or GCP. Proficiency in tools like Collibra, Informatica, InfoSphere, Reltio, and similar platforms. Strong understanding of data modeling, ETL/ELT processes, and cloud integration. Interested candidates can apply directly. Alternatively, you can also send your resume to ansari.m@atos.net Show more Show less

Posted 2 days ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

Remote


Work Level: Individual
Core: Responsible
Leadership: Team Alignment
Industry Type: Information Technology
Function: Database Administrator
Key Skills: PL/SQL, SQL Writing, MySQL
Education: Graduate

Note: This is a requirement for one of the Workassist Hiring Partners. This is a remote position.

Primary Responsibilities:
Write, optimize, and maintain SQL queries, stored procedures, and functions.
Assist in designing and managing relational databases.
Perform data extraction, transformation, and loading (ETL) tasks.
Ensure database integrity, security, and performance.
Work with developers to integrate databases into applications.
Support data analysis and reporting by writing complex queries.
Document database structures, processes, and best practices.

Requirements:
Currently pursuing or recently completed a degree in Computer Science, Information Technology, or a related field.
Strong understanding of SQL and relational database concepts.
Experience with databases such as MySQL, PostgreSQL, SQL Server, or Oracle.
Ability to write efficient and optimized SQL queries.
Basic knowledge of indexing, stored procedures, and triggers.
Understanding of database normalization and design principles.
Good analytical and problem-solving skills.
Ability to work independently and in a team in a remote setting.

Preferred Skills (Nice to Have):
Experience with ETL processes and data warehousing.
Knowledge of cloud-based databases (AWS RDS, Google BigQuery, Azure SQL).
Familiarity with database performance tuning and indexing strategies.
Exposure to Python or other scripting languages for database automation.
Experience with business intelligence (BI) tools like Power BI or Tableau.

What We Offer:
Fully remote internship with flexible working hours.
Hands-on experience with real-world database projects.
Mentorship from experienced database professionals.
Certificate of completion and potential for a full-time opportunity based on performance.

Company Description:
Workassist is an online recruitment and employment solution platform based in Lucknow, India. We provide relevant profiles to employers and connect job seekers with the best opportunities across various industries. With a network of over 10,000+ recruiters, we help employers recruit talented individuals from sectors such as Banking & Finance, Consulting, Sales & Marketing, HR, IT, Operations, and Legal. We have adapted to the new normal and strive to provide a seamless job search experience for job seekers worldwide. Our goal is to enhance the job-seeking experience by leveraging technology and matching job seekers with the right employers. For a seamless job search experience, visit our website: https://bit.ly/3QBfBU2 (Note: There are many more opportunities apart from this on the portal. Depending on your skills, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
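
Since the role above centers on writing SQL and performing basic ETL tasks, here is a minimal, hypothetical extract-transform-load step scripted from Python (listed as a preferred automation skill). The table names and the in-memory SQLite database are illustrative stand-ins for whichever relational database is actually in use.

```python
# Minimal, hypothetical ETL step: extract raw rows, aggregate them in SQL,
# and load the result into a reporting table. SQLite stands in for any RDBMS.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_sales (region TEXT, amount REAL);
    CREATE TABLE sales_summary (region TEXT PRIMARY KEY, total REAL);
    INSERT INTO raw_sales VALUES ('North', 120.0), ('North', 80.0), ('South', 50.0);
""")

# Extract + transform: aggregate the raw rows per region.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM raw_sales GROUP BY region"
).fetchall()

# Load: upsert the aggregated figures into the summary table.
conn.executemany(
    "INSERT OR REPLACE INTO sales_summary (region, total) VALUES (?, ?)", rows
)
conn.commit()

print(conn.execute("SELECT * FROM sales_summary").fetchall())
```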

Posted 2 days ago

Apply

5.0 - 10.0 years

19 - 30 Lacs

Bengaluru

Hybrid


Greetings from Altimetrik!

We are looking for a highly skilled and experienced Data Engineer to join our dynamic team.

Technical Skills & Qualifications:
Strong understanding of data engineering and dimensional design fundamentals; good at SQL, integration (ETL), and front-end analysis/data visualization; learns new technologies quickly.
Strong in SQL, with hands-on coding skills in Python and shell scripts.
Design and develop ETL pipelines across multiple platforms and tools, including Spark, Hadoop, and AWS Data Services.
Develop, test, and maintain high-quality software using Python and shell.
Design and develop schema definitions and support the data warehouse/mart to enable integration of disparate data sources from within Intuit and outside, aggregate the data, and make it available for analysis.
Support large data volumes and accommodate flexible provisioning of new sources.
Contribute to the design and architecture of projects across the data landscape.
Participate in the entire software development lifecycle: building, testing, and delivering high-quality solutions.
Write clean and reusable code that can be easily maintained and scaled.
Gather functional requirements, develop technical specifications, and handle project and test planning.
Work with business users to develop and refine analytical requirements for quantitative data (view-through, clickstream, acquisition, product usage, transactions), qualitative data (survey, market research), and unstructured data (blog, social network).
Familiarity with on-call support and fixing issues before SLA breach.
Experience with Agile development, Scrum, or Extreme Programming methodologies.
Helps align work to overall strategies and reconcile competing priorities across the organization.

Educational Qualification: Bachelor's degree in Engineering or Master's degree (PG).

Experience: 5 to 10 years

Mandatory Skills (must have and should be strong):
SQL, ETL, Spark, Hive, Data Warehouse/Data Mart design
Python / Scala / Shell scripting (good scripting skills)
AWS / Azure

Good to have:
Streaming (e.g., Kafka)
Gen-AI skills

Notice period: Immediate joiners or July joiners

If interested, please share the below details by email to sranganathan11494@altimetrik.com so we can reach you:
Total years of experience:
Experience relevant to PySpark/Python:
Relevant experience in SQL:
Experience in data warehousing:
Experience in AWS:
Current CTC:
Expected CTC:
Notice Period:
Company name:
Contact No:
Contact email id:
Current Location:
Preferred Location:
Are you willing to work 2 days from the office (Bangalore)?

Thanks,
R Sasikala
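
The listing above asks for hands-on ETL pipeline development on Spark with Python. Below is a minimal, hypothetical PySpark sketch of such a pipeline step: reading raw events, aggregating them, and writing the result out for analysis. The file paths, column names, and aggregation are illustrative assumptions rather than details from the posting.

```python
# Hypothetical PySpark ETL step: read raw events, aggregate product usage
# per day, and write the result as partitioned Parquet for analysts.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("usage_aggregation").getOrCreate()

# Extract: raw clickstream/usage events (the path is a placeholder).
events = spark.read.parquet("s3://example-bucket/raw/usage_events/")

# Transform: daily active users and event counts per product.
daily_usage = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "product_id")
    .agg(
        F.countDistinct("user_id").alias("active_users"),
        F.count("*").alias("event_count"),
    )
)

# Load: partitioned Parquet in the data mart zone (the path is a placeholder).
(
    daily_usage.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/mart/daily_product_usage/")
)
```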

Posted 2 days ago

Apply

15.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


We are a Rakuten Group company, providing global B2B/B2C services for the mobile telco industry and enabling next-generation, cloud-based, international mobile services. Building on the technology Rakuten used to launch Japan's newest mobile network, we are now taking our mobile offering global! To support our ambitions to provide an innovative cloud-native telco platform for our customers, we are looking to recruit and develop top talent in Digital Product Management. Let's build the future of mobile telecommunications together!

Role: Technical Program Manager

You will independently lead cross-organisation programs, influencing roadmap priorities and technical direction across teams. You will work with stakeholders across the organisation and own communication of all aspects of the program, including surfacing risks and progress towards the goal. You will guide the team towards technical solutions and make trade-off decisions. You will drive program management best practices across the organisation.

The role requires working closely with multiple functional teams (including but not limited to Business, Architects, Engineering, and Operations support) to build and maintain program delivery timelines, unblock teams, and define and streamline cross-functional dependencies while increasing the efficiency and velocity of project execution. You will likely spend most days in Agile, Kanban, or other project planning tools and in meetings with relevant stakeholders, keeping projects moving forward, delivering a program execution strategy and timeline, and regularly reporting project health to stakeholders throughout a project's life cycle.

Team: RBSS Delivery organization

Skills and Qualifications:
Up to 15 years of hands-on technical project/program management experience, with at least 10+ years managing programs or working in Scrum teams.
Must have a telecom background, with at least 5+ years of exposure to working with telecom operators/ISPs (B2B, B2C customer solutions) in software delivery/integration in the BSS domain.
Technology stack: has managed complex data migration projects involving technologies such as cloud (AWS, GCP, or compatible), microservices, various DB solutions (Oracle, MySQL, Couchbase, Elastic DB, Camunda, etc.), data streaming technologies (such as Kafka), and the tools associated with this stack.
Excellent knowledge of project management methodology and software development life cycles, including Agile, with excellent client-facing and internal communication skills.
Ability to plan, organize, prioritize, and deliver multiple projects simultaneously.
In-depth knowledge and understanding of telecom BSS business needs, with the ability to establish and maintain a high level of customer trust and confidence, plus solid organizational skills including attention to detail and multitasking.
Good understanding of the challenges associated with the BSS business and of its high-level modules (CRM, Order Management, Revenue Management, and Billing services).
Excellent verbal, written, and presentation skills to effectively communicate complex technical and business issues (and solutions) to diverse audiences.
Strong analytical, planning, and organizational skills with an ability to manage competing demands.
Always curious, with a passion for continuous learning in a fast-moving environment.
Strong working knowledge of Microsoft Office, Confluence, JIRA, etc.
Good to have: Project Management Professional (PMP) / Certified Scrum Master certification.
Good to have: knowledge of external solutions integrated with ETL software, billing, and warehouse/supply-chain-related migration projects.

Key Job Responsibilities:
Manage and streamline program planning by evaluating incoming project demand across multiple channels against available capacity.
Regularly define and review KPIs, proactively seeking out new and improved mechanisms for visibility to ensure your program stays aligned with organizational objectives.
Develop and maintain Kanban boards and workstream dashboards.
Work with stakeholders during the entire life cycle of the program: execute project requirements, prepare detailed project plans, identify risks, manage vendors and vendor resources, measure program metrics, and take corrective and preventive actions.
Adopt Agile best practices (such as estimation techniques) and define and optimize processes.
Coordinate with the product management team to plan features and stories into sprints, understand business priorities, and align required stakeholders so the team can deliver the expected outcome.
Manage technology improvements and other enhancements from conceptualization to delivery; develop a deep understanding of their impact and pros/cons, work through the required detail, and collaborate with all stakeholders until the change is successfully deployed to production.
Manage and deliver planned RBSS releases by working with customers. Work with Scrum Masters, plan Scrum capacity, and manage team productivity.
Monitor the progress of software developed by Scrum teams and the quality of deliverables.
Work with engineering and product teams to scope product delivery, define solution strategies, and understand development alternatives, as well as provide support.
Ensure availability to the team to answer questions and provide direction.
Work across multiple teams and vendors (cross-cutting across programs, business/engineering teams, and/or technologies) to drive delivery strategy and dependency management, ensuring active delivery and proactive communication.
Forecast and manage infrastructure and resourcing demand against the operational growth of the platform in collaboration with engineering teams.
Deliver Agile projects that offer outstanding business value to users.
Support stakeholders in implementing an effective project governance system.

"Rakuten is committed to cultivating and preserving a culture of inclusion and connectedness. We are able to grow and learn better together with a diverse team and inclusive workforce. The collective sum of the individual differences, life experiences, knowledge, innovation, self-expression, and talent that our employees invest in their work represents not only part of our culture, but our reputation and Rakuten's achievement as well. In recruiting for our team, we welcome the unique contributions that you can bring in terms of education, opinions, culture, ethnicity, race, sex, gender identity and expression, nation of origin, age, languages spoken, veteran's status, color, religion, disability, sexual orientation, and beliefs."

Posted 2 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: QA Tester - Data
Job Type: Full-time
Location: On-site - Hyderabad, Pune, or New Delhi

Job Summary:
Join our customer's team as a dedicated ETL Tester where your expertise will drive the quality and reliability of crucial business data solutions. As an integral part of our testing group, you will focus on ETL testing while engaging in automation, API, and MDM testing to support robust, end-to-end data validation and integration. We value professionals who demonstrate strong written and verbal communication and a passion for delivering high-quality solutions.

Key Responsibilities:
Design, develop, and execute comprehensive ETL test cases, scenarios, and scripts to validate data extraction, transformation, and loading processes.
Collaborate with data engineers, business analysts, and QA peers to clarify requirements and ensure accurate data mapping, lineage, and transformations.
Perform functional, automation, API, and MDM testing to support a holistic approach to quality assurance.
Utilize tools such as Selenium to drive automation efforts for repeatable and scalable ETL testing processes.
Identify, document, and track defects while proactively communicating risks and issues to stakeholders with clarity and detail.
Work on continuous improvement initiatives to enhance test coverage, efficiency, and effectiveness within the ETL testing framework.
Create and maintain detailed documentation for test processes and outcomes, supporting both internal knowledge sharing and compliance requirements.

Required Skills and Qualifications:
Strong hands-on experience in ETL testing, including understanding of ETL tools and processes.
Proficiency in automation testing using Selenium or similar frameworks.
Experience in API testing, functional testing, and MDM testing.
Excellent written and verbal communication skills, with an ability to articulate technical concepts clearly to diverse audiences.
Solid analytical and problem-solving abilities to troubleshoot data and process issues.
Attention to detail and a commitment to high-quality deliverables.
Ability to thrive in a collaborative, fast-paced team environment on-site at Hyderabad.

Preferred Qualifications:
Prior experience working in large-scale data environments or within MDM projects.
Familiarity with data warehousing concepts, SQL, and data migration best practices.
ISTQB or related QA/testing certification.
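
The role above combines ETL testing with test automation. Below is a minimal, hypothetical pytest-style sketch of automated data-validation checks, the sort of repeatable test case the posting describes. The table names, columns, and the in-memory SQLite database are illustrative assumptions standing in for real source and target systems.

```python
# Hypothetical automated ETL validation checks written as pytest tests.
# SQLite in-memory tables stand in for real source/target systems.
import sqlite3
import pytest

@pytest.fixture
def conn():
    c = sqlite3.connect(":memory:")
    c.executescript("""
        CREATE TABLE src_customers (id INTEGER, email TEXT);
        CREATE TABLE tgt_customers (id INTEGER, email TEXT);
        INSERT INTO src_customers VALUES (1, 'a@example.com'), (2, 'b@example.com');
        INSERT INTO tgt_customers VALUES (1, 'a@example.com'), (2, 'b@example.com');
    """)
    return c

def test_row_counts_match(conn):
    src = conn.execute("SELECT COUNT(*) FROM src_customers").fetchone()[0]
    tgt = conn.execute("SELECT COUNT(*) FROM tgt_customers").fetchone()[0]
    assert src == tgt, f"Target is missing {src - tgt} rows"

def test_no_null_business_keys(conn):
    nulls = conn.execute(
        "SELECT COUNT(*) FROM tgt_customers WHERE id IS NULL OR email IS NULL"
    ).fetchone()[0]
    assert nulls == 0, "Null business keys found after load"
```

Checks like these can be run by a scheduler or CI pipeline after each load so regressions surface before business users see the data.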

Posted 2 days ago

Apply

Exploring ETL Jobs in India

The ETL (Extract, Transform, Load) job market in India is thriving with numerous opportunities for job seekers. ETL professionals play a crucial role in managing and analyzing data effectively for organizations across various industries. If you are considering a career in ETL, this article will provide you with valuable insights into the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving tech industries and often have a high demand for ETL professionals.

Average Salary Range

The average salary range for ETL professionals in India varies based on experience levels. Entry-level positions typically start at around ₹3-5 lakhs per annum, while experienced professionals can earn upwards of ₹10-15 lakhs per annum.

Career Path

In the ETL field, a typical career path may include roles such as:

  • Junior ETL Developer
  • ETL Developer
  • Senior ETL Developer
  • ETL Tech Lead
  • ETL Architect

As you gain experience and expertise, you can progress to higher-level roles within the ETL domain.

Related Skills

Alongside ETL, professionals in this field are often expected to have skills in:

  • SQL
  • Data Warehousing
  • Data Modeling
  • ETL Tools (e.g., Informatica, Talend)
  • Database Management Systems (e.g., Oracle, SQL Server)

Having a strong foundation in these related skills can enhance your capabilities as an ETL professional.

Interview Questions

Here are 25 interview questions that you may encounter in ETL job interviews:

  • What is ETL and why is it important? (basic)
  • Explain the difference between ETL and ELT processes. (medium)
  • How do you handle incremental loads in ETL processes? (medium) (see the sketch after this list)
  • What is a surrogate key in the context of ETL? (basic)
  • Can you explain the concept of data profiling in ETL? (medium)
  • How do you handle data quality issues in ETL processes? (medium)
  • What are some common ETL tools you have worked with? (basic)
  • Explain the difference between a full load and an incremental load. (basic)
  • How do you optimize ETL processes for performance? (medium)
  • Can you describe a challenging ETL project you worked on and how you overcame obstacles? (advanced)
  • What is the significance of data cleansing in ETL? (basic)
  • How do you ensure data security and compliance in ETL processes? (medium)
  • Have you worked with real-time data integration in ETL? If so, how did you approach it? (advanced)
  • What are the key components of an ETL architecture? (basic)
  • How do you handle data transformation requirements in ETL processes? (medium)
  • What are some best practices for ETL development? (medium)
  • Can you explain the concept of change data capture in ETL? (medium)
  • How do you troubleshoot ETL job failures? (medium)
  • What role does metadata play in ETL processes? (basic)
  • How do you handle complex transformations in ETL processes? (medium)
  • What is the importance of data lineage in ETL? (basic)
  • Have you worked with parallel processing in ETL? If so, explain your experience. (advanced)
  • How do you ensure data consistency across different ETL jobs? (medium)
  • Can you explain the concept of slowly changing dimensions in ETL? (medium)
  • How do you document ETL processes for knowledge sharing and future reference? (basic)
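
To make one of the questions above concrete, handling incremental loads, here is a minimal, hypothetical watermark-based sketch in Python. The table names, the watermark column, and the SQLite database are illustrative assumptions, not a prescribed answer.

```python
# Hypothetical watermark-based incremental load: each run extracts only rows
# newer than the last recorded high-water mark. SQLite stands in for any RDBMS.
import sqlite3

def incremental_load(conn):
    row = conn.execute(
        "SELECT MAX(last_loaded_ts) FROM etl_watermark WHERE job = 'orders'"
    ).fetchone()
    watermark = row[0] or "1970-01-01"           # first run loads everything

    new_rows = conn.execute(
        "SELECT order_id, amount, updated_ts FROM src_orders WHERE updated_ts > ?",
        (watermark,),
    ).fetchall()

    with conn:                                   # commit target rows and watermark together
        conn.executemany(
            "INSERT OR REPLACE INTO dw_orders VALUES (?, ?, ?)", new_rows
        )
        if new_rows:
            conn.execute(
                "INSERT INTO etl_watermark VALUES ('orders', ?)",
                (max(r[2] for r in new_rows),),
            )
    return len(new_rows)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE src_orders (order_id INTEGER, amount REAL, updated_ts TEXT);
        CREATE TABLE dw_orders  (order_id INTEGER, amount REAL, updated_ts TEXT);
        CREATE TABLE etl_watermark (job TEXT, last_loaded_ts TEXT);
        INSERT INTO src_orders VALUES (1, 100.0, '2024-06-01'), (2, 250.0, '2024-06-02');
    """)
    print(incremental_load(conn))   # 2 rows on the first run
    print(incremental_load(conn))   # 0 rows when nothing has changed
```

The same idea answers the full-load versus incremental-load question: a full load reprocesses everything, while an incremental load relies on a reliable change indicator (a timestamp, a sequence, or a change-data-capture log) to limit each run to new or modified rows.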

Closing Remarks

As you explore ETL jobs in India, remember to showcase your skills and expertise confidently during interviews. With the right preparation and a solid understanding of ETL concepts, you can embark on a rewarding career in this dynamic field. Good luck with your job search!
