Home
Jobs

13,457 ETL Jobs - Page 44

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.5 years

6 - 7 Lacs

Chennai

Remote

Job Title: PL/SQL Developer Chennai, OMR CTC: 6.5 to 7 LPA Interview Mode: 1) Virtual Interview 2) Telephonic Interview 3) Face to Face Interview Gender: Male Mandatory Skillset: ETL, Data Migration, SQL Queries, Oracle database tools – Toad / SQL Developer Key Responsibilities Develop, test, and maintain complex PL/SQL packages, procedures, functions, and triggers for data processing and ETL tasks. Design and implement database schemas and objects such as tables, indexes, and views. Analyze business requirements and translate them into technical solutions using PL/SQL. Optimize SQL queries and database performance for high efficiency. Perform data analysis to support report generation and modify existing reports as needed. Develop migration scripts for data transfer between systems. Ensure compliance with security standards to protect sensitive data. Provide technical support for production systems, including troubleshooting and resolving issues. Document technical specifications and create reusable code for scalability. Required Skills Technical Skills: Proficiency in Oracle PL/SQL programming with experience in developing stored procedures, functions, and triggers. Strong understanding of relational database concepts (RDBMS) and performance tuning techniques. Experience with ETL processes and data warehouse integration. Knowledge of advanced PL/SQL features like collections, ref cursors, dynamic SQL, and materialized views. Familiarity with tools like SQL Developer, Toad, or similar IDEs. Exposure to Unix/Linux scripting is a plus. Soft Skills: Strong analytical and problem-solving abilities. Excellent communication skills to interact with stakeholders and team members effectively. Attention to detail with a focus on accuracy in coding and testing. Ability to work both independently and in a team environment. Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience) Proven experience (3.5 to 4+ years) in Oracle PL/SQL development Job Types: Full-time, Internship Pay: ₹600,000.00 - ₹700,000.00 per year Benefits: Paid sick time Paid time off Work from home Schedule: Day shift Work Location: In person
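For readers unfamiliar with the stack this posting describes, below is a minimal sketch of the kind of PL/SQL data-migration step such a role involves, executed from Python with the python-oracledb driver. The table, column, user, and DSN names are hypothetical, not taken from the posting.

```python
# Hypothetical sketch: an anonymous PL/SQL block that copies new rows from a
# staging table into a target table, a common ETL/data-migration pattern.
# All identifiers and credentials below are illustrative placeholders.
import oracledb

PLSQL_BLOCK = """
BEGIN
  INSERT INTO customers (customer_id, name, email)
  SELECT s.customer_id, s.name, s.email
  FROM   customers_stg s
  WHERE  NOT EXISTS (SELECT 1 FROM customers c
                     WHERE  c.customer_id = s.customer_id);
  COMMIT;
END;
"""

with oracledb.connect(user="etl_user", password="***", dsn="dbhost/XEPDB1") as conn:
    with conn.cursor() as cur:
        # the same block could be run interactively in Toad or SQL Developer
        cur.execute(PLSQL_BLOCK)
```

In practice such logic would usually live in a stored procedure rather than an anonymous block, so it can be unit-tested and reused.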

Posted 4 days ago

Apply

10.0 years

0 Lacs

India

On-site


Company Description 👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That's where you come in! Job Description REQUIREMENTS: Total experience: 10+ years Strong experience in delivering data engineering projects with Python. Strong proficiency in Python for data analysis and scripting. Hands-on experience with Azure technologies (Azure ADF, Synapse, etc.). Strong knowledge of ETL, data warehousing and business intelligence. Proficient in designing and developing data integration workflows. Strong experience with Azure Synapse Analytics for data warehousing. Solid experience with Databricks for big data processing. Experience in managing complex and technical development projects in the areas of ETL, Data Warehouse & BI. Excellent problem-solving skills, strong communication abilities, and a collaborative mindset. Relevant certifications in Azure or data engineering are a plus RESPONSIBILITIES: Understanding the client's business use cases and technical requirements and converting them into a technical design that elegantly meets the requirements. Mapping decisions to requirements and translating them to developers. Identifying different solutions and narrowing down the best option that meets the client's requirements. Defining guidelines and benchmarks for NFR considerations during project implementation Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it Understanding and relating technology integration scenarios and applying these learnings in projects Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and being able to justify the decision taken. Carrying out POCs to make sure that suggested design/technologies meet the requirements. Qualifications Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.

Posted 4 days ago

Apply

5.0 - 8.0 years

3 - 8 Lacs

Chennai

On-site

Data Engineer, Chennai, India About the job: The Data Engineer is a cornerstone of Vendasta's R&D team, driving the efficient processing, organization, and delivery of clean, structured data in support of business intelligence and decision-making. By developing and maintaining scalable ELT pipelines, they ensure data reliability and scalability, adhering to Vendasta's commitment to delivering data solutions aligned with evolving business needs. Your Impact: Design, implement, and maintain scalable ELT pipelines within a Kimball Architecture data warehouse. Ensure robustness against failures and data entry errors, managing data conformation, de-duplication, survivorship, and coercion. Manage historical and hierarchical data structures, ensuring usability for the Business Intelligence (BI) team and scalability for future growth. Partner with BI teams to prioritize and deliver data solutions while maintaining alignment with business objectives. Work closely with source system owners to extract, clean, and integrate data into the data warehouse. Advocate for and influence improvements in source data integrity. Champion best practices in data engineering, including governance, lineage tracking, and quality assurance. Collaborate with Site Reliability Engineering (SRE) teams to optimize cloud infrastructure usage. Operate within an Agile framework, contributing to team backlogs via Kanban or Scrum processes as appropriate. Balance short-term deliverables with long-term technical investments in collaboration with BI and engineering management. What you bring to the table: 5 - 8 years of proficiency in ETL and SQL, and experience with cloud-based platforms like Google Cloud (BigQuery, DBT, Looker). In-depth understanding of Kimball data warehousing principles, including the 34 subsystems of ETL. Strong problem-solving skills for diagnosing and resolving data quality issues. Ability to engage with BI teams and source system owners to prioritize and deliver data solutions effectively. Eagerness to advocate for data integrity improvements while respecting the boundaries of data mesh principles. Ability to balance immediate needs with long-term technical investments. Understanding of cloud infrastructure for effective resource management in partnership with SRE teams. About Vendasta: So what do we actually do? Vendasta is a SaaS company composed of global brands including MatchCraft, Yesware, and Broadly that builds and sells software and services to help small businesses operate more efficiently as a team, meet more client needs, and provide incredible client experiences. We have offices in Saskatoon, Saskatchewan; Boston and Boca Raton, Florida; and Chennai, India. Perks: Health insurance Paid time off Training & Career Development: professional development plans, leadership workshops, mentorship programs, and more! Free snacks, hot beverages, and catered lunches on Fridays Culture built on our core values: Drive, Innovation, Respect, and Agility Night Shift Premium Provident Fund
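As an illustration of the de-duplication and survivorship work this posting mentions, here is a hedged sketch in BigQuery SQL (one of the platforms the posting lists), run through the google-cloud-bigquery client. The dataset, table, and column names are invented, not from the posting.

```python
# Sketch: keep only the most recent record per business key, a simple
# survivorship rule. Dataset, table, and column names are hypothetical.
from google.cloud import bigquery

DEDUP_SQL = """
CREATE OR REPLACE TABLE warehouse.dim_customer AS
SELECT * EXCEPT (rn)
FROM (
  SELECT *,
         ROW_NUMBER() OVER (PARTITION BY customer_id
                            ORDER BY updated_at DESC) AS rn
  FROM staging.customer
)
WHERE rn = 1
"""

client = bigquery.Client()        # uses application-default credentials
client.query(DEDUP_SQL).result()  # result() blocks until the job completes
```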

Posted 4 days ago

Apply

0.0 - 2.0 years

4 - 5 Lacs

Chennai

On-site

Flex is the diversified manufacturing partner of choice that helps market-leading brands design, build and deliver innovative products that improve the world. A career at Flex offers the opportunity to make a difference and invest in your growth in a respectful, inclusive, and collaborative environment. If you are excited about a role but don't meet every bullet point, we encourage you to apply and join us to create the extraordinary. Job Description To support our extraordinary teams who build great products and contribute to our growth, we're looking to hire a Systems Analyst – IT based out of our Chennai location, reporting to the Manager. What a typical day looks like: Take ownership of user problems and be proactive when dealing with user issues. Monitor all alerts and report any issue that may significantly impact the business. Monitoring, analyzing, troubleshooting problems, providing code fixes and testing Create/develop/utilize application monitoring solutions to enhance application availability and performance in production Conduct root-cause analysis as and when needed and propose a corrective action plan Follow the established set of processes while handling issues and for proper escalation of unresolved issues to appropriate internal teams Responsible for reports, requests (like RFPs), maintaining the SLA (Service Level Agreement) and accounts payable records Maintain logs of all issues and ensure resolutions according to quality assurance tests for all production processes. Ensure on-time delivery of all assigned tasks – incidents, problem tickets, etc. Adhere to the defined support delivery process/guidelines like Problem Management, Incident Management, Change Management, SLA Compliance, Productivity, other application goals, etc. Identify and learn more about the software used/supported by the organization. Develop/propose innovative approaches to process improvements & automation possibilities Work independently and communicate effectively with users, development and support teams during downtimes and when there are questions or issues to be addressed Work with shift members and ensure KPIs are met within the team Work on automation and ticket reduction Collaborate with stakeholders and ensure a bug-free system is available for end users The experience we're looking to add to our team: Qualifications: Bachelor's or Master's in Engineering Overall 0 to 2 years of experience in the production support area (L1, L2 and L3) Basic knowledge of SQL, SSIS / SSRS. Intermediate knowledge of .NET, Python or Power Apps. Data Management/ETL projects Ability to lead efficiency-improvement ideas and provide the required leadership to the team Exhibit a strong sense of urgency for high-severity incidents Highly motivated and eager to be part of the Systems team Interest in display technology A passion for finding the root cause of problems, even when the process is tedious Basic ability to edit .NET scripts a plus Ready to work in rotational shifts (7 am to 4 pm, 2 pm to 11 pm, and 10.30 pm to 7.30 am), including a weekend shift once a month. Strong communication, presentation and writing skills What you'll receive for the great work you provide: Health Insurance Paid Time Off #BB04 Job Category IT Flex pays for all costs associated with the application, interview or offer process; a candidate will not be asked for any payment related to these costs.
Flex does not accept unsolicited resumes from headhunters, recruitment agencies or fee-based recruitment services. Flex is an Equal Opportunity Employer and employment selection decisions are based on merit, qualifications, and abilities. Flex does not discriminate in employment opportunities or practices based on: age, race, religion, color, sex, national origin, marital status, sexual orientation, gender identity, veteran status, disability, pregnancy status or any other status protected by law. Flex provides reasonable accommodation so that qualified applicants with a disability may participate in the selection process. Please advise us of any accommodations you request to express interest in a position by e-mailing: accessibility@flex.com . Please state your request for assistance in your message. Only reasonable accommodation requests related to applying for a specific position within Flex will be reviewed at the e-mail address. Flex will contact you if it is determined that your background is a match for the skills required for this position. Thank you for considering a career with Flex.

Posted 4 days ago

Apply

3.0 years

0 Lacs

Chennai

On-site

Do you want to work on complex and pressing challenges—the kind that bring together curious, ambitious, and determined leaders who strive to become better every day? If this sounds like you, you've come to the right place. Your Impact As a Data Engineer I at McKinsey & Company, you will play a key role in designing, building, and deploying scalable data pipelines and infrastructure that enable our analytics and AI solutions. You will work closely with product managers, developers, asset owners, and client stakeholders to turn raw data into trusted, structured, and high-quality datasets used in decision-making and advanced analytics. Your core responsibilities will include: Developing robust, scalable data pipelines for ingesting, transforming, and storing data from multiple structured and unstructured sources using Python/SQL. Creating and optimizing data models and data warehouses to support reporting, analytics, and application integration. Working with cloud-based data platforms (AWS, Azure, or GCP) to build modern, efficient, and secure data solutions. Contributing to R&D projects and internal asset development. Contributing to infrastructure automation and deployment pipelines using containerization and CI/CD tools. Collaborating across disciplines to integrate data engineering best practices into broader analytical and generative AI (gen AI) workflows. Supporting and maintaining data assets deployed in client environments with a focus on reliability, scalability, and performance. Furthermore, you will have the opportunity to explore and contribute to solutions involving generative AI, such as vector embeddings, retrieval-augmented generation (RAG), semantic search, and LLM-based prompting, especially as we integrate gen AI capabilities into our broader data ecosystem. Your Growth Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high-performance/high-reward culture - doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues—at all levels—will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you'll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won't find anywhere else. When you join us, you will have: Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey. A voice that matters: From day one, we value your ideas and contributions. You'll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, but they are critical in driving us toward the best possible outcomes. Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm's diversity fuels creativity and helps us come up with the best solutions for our clients. Plus, you'll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences.
World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package, which includes medical, dental, mental health, and vision coverage for you, your spouse/partner, and children. Your qualifications and skills Bachelor's degree in computer science, engineering, mathematics, or a related technical field (or equivalent practical experience). 3+ years of experience in data engineering, analytics engineering, or a related technical role. Strong Python programming skills with demonstrated experience building scalable data workflows and ETL/ELT pipelines. Proficient in SQL with experience designing normalized and denormalized data models. Hands-on experience with orchestration tools such as Airflow, Kedro, or Azure Data Factory (ADF). Familiarity with cloud platforms (AWS, Azure, or GCP) for building and managing data infrastructure. Strong communication skills, especially around breaking down complex structures into digestible and relevant points for a diverse set of clients and colleagues, at all levels. High-value personal qualities including critical thinking and creative problem-solving skills; an ability to influence and work in teams. An entrepreneurial mindset and ownership mentality are a must; desire to learn and develop within a dynamic, self-led organization. Hands-on experience with containerization technologies (Docker, Docker Compose). Hands-on experience with automation frameworks (GitHub Actions, CircleCI, Jenkins, etc.). Exposure to generative AI tools or concepts (e.g., OpenAI, Cohere, embeddings, vector databases). Experience working in Agile teams and contributing to design and architecture discussions. Contributions to open-source projects or active participation in data engineering communities.
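To illustrate the orchestration experience requested above, here is a minimal Airflow DAG sketch of an ingest-transform-load chain. It assumes Airflow 2.x (where the parameter is `schedule`; older releases use `schedule_interval`), and every task name and callable is a placeholder rather than anything from the posting.

```python
# Minimal sketch of a linear ETL DAG: ingest -> transform -> load.
# All names are hypothetical; the bodies just print what a real task would do.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull raw data from source systems")

def transform():
    print("clean and reshape into analytical tables")

def load():
    print("publish to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="ingest", python_callable=ingest)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency chain
```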

Posted 4 days ago

Apply

2.0 years

0 Lacs

Chennai

On-site

The Testing Analyst 2 is a developing professional role. Applies specialty area knowledge in monitoring, assessing, analyzing and/or evaluating processes and data. Identifies policy gaps and formulates policies. Interprets data and makes recommendations. Researches and interprets factual information. Identifies inconsistencies in data or results, defines business issues and formulates recommendations on policies, procedures or practices. Integrates established disciplinary knowledge within own specialty area with a basic understanding of related industry practices. Good understanding of how the team interacts with others in accomplishing the objectives of the area. Develops working knowledge of industry practices and standards. Limited but direct impact on the business through the quality of the tasks/services provided. Impact of the job holder is restricted to own team. Responsibilities: Supports initiatives related to the User Acceptance Testing (UAT) process and product rollout into production. Works with technology project managers, UAT professionals and users to design and implement appropriate scripts/plans for an application testing strategy/approach. Performs software quality assurance testing. Conducts a variety of quality control user acceptance tests and analysis to ensure that applications meet or exceed specified standards and end-user requirements. Creates test scripts; executes test scripts according to application requirements documentation; logs defects. Retests software corrections to ensure problems are resolved. Documents the evolution of test cases and scripts for future replication. Contributes to the development of test plans. Interfaces with development teams if clarification is needed on requirements. Exhibits good understanding of procedures and concepts within own technical area and a basic knowledge of these elements in other areas. Requires a good understanding of how the team interacts with others in accomplishing the objectives of the area. Makes evaluative judgments based on the analysis of factual information; resolves problems by identifying and selecting solutions through the application of acquired technical experience and guided by precedents. Has limited but direct impact on the team and closely related teams through the quality of the tasks/services provided. Exchanges ideas and information in a concise and logical way; recognizes audience diversity. Performs other duties and functions as assigned. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Qualifications: 2+ years relevant testing experience preferred Basic knowledge of relational databases. Knowledge of applications supporting the testing process Demonstrated analytical skills & ability to work independently on assigned tasks Experience in software application testing Education: Bachelor's/University degree or equivalent experience 1. Hands-on SQL and ETL experience 2. Any test automation or programming knowledge 3. Quality engineering process knowledge - Job Family Group: Technology - Job Family: Technology Quality - Time Type: Full time - Most Relevant Skills Please see the requirements listed above.
- Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. - Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 4 days ago

Apply

0 years

0 Lacs

Chennai

On-site

P2-C2-STS Examine business needs to determine the appropriate automation testing technique. Maintenance of existing regression suites and test scripts is an important responsibility of the tester. Testers must attend agile meetings for backlog refinement, sprint planning, and daily scrum meetings. Testers execute regression suites and must provide results to developers, project managers, stakeholders, and manual testers. Develop and execute test plans, test cases, and test scripts for ETL processes. Validate data extraction, transformation, and loading workflows Analyze test results and provide detailed reports to stakeholders. Automate repetitive testing tasks to improve efficiency. A strong SQL base to validate the transformations. Skill Proficiency Level expected Strong ETL Testing Strong SQL - in-depth understanding of SQL queries and applying them in QA testing. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
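To make the SQL-validation requirement concrete, here is a small, self-contained sketch of the kind of source-to-target reconciliation an ETL tester writes. sqlite3 stands in for the real source and target databases, and all table and column names are invented.

```python
# Sketch: compare row counts and a column aggregate between source and target
# after a load, two of the simplest ETL validation checks.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (id INTEGER, amount REAL);
    CREATE TABLE tgt (id INTEGER, amount REAL);
    INSERT INTO src VALUES (1, 10.0), (2, 20.5);
    INSERT INTO tgt VALUES (1, 10.0), (2, 20.5);
""")

src_count, = conn.execute("SELECT COUNT(*) FROM src").fetchone()
tgt_count, = conn.execute("SELECT COUNT(*) FROM tgt").fetchone()
assert src_count == tgt_count, "row counts diverge after load"

src_sum, = conn.execute("SELECT SUM(amount) FROM src").fetchone()
tgt_sum, = conn.execute("SELECT SUM(amount) FROM tgt").fetchone()
assert src_sum == tgt_sum, "amount totals diverge: transformation lost data"
```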

Posted 4 days ago

Apply

5.0 years

7 - 15 Lacs

Ahmedabad

On-site

We are accepting applications for an experienced Data Engineer with a strong background in data scraping, cleaning, transformation, and automation. The ideal candidate will be responsible for building robust data pipelines, maintaining data integrity, and generating actionable dashboards and reports to support business decision-making. Key Responsibilities: Develop and maintain scripts for scraping data from various sources including APIs, websites, and databases. Perform data cleaning, transformation, and normalization to ensure consistency and usability across all data sets. Design and implement relational and non-relational data tables and frames for scalable data storage and analysis. Build automated data pipelines to ensure timely and accurate data availability. Create and manage interactive dashboards and reports using tools such as Power BI, Tableau, or similar platforms. Write and maintain data automation scripts to streamline ETL (Extract, Transform, Load) processes. Ensure data quality, governance, and compliance with internal and external regulations. Monitor and optimize the performance of data workflows and pipelines. Qualifications & Skills: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. Minimum of 5 years of experience in a data engineering or similar role. Proficient in Python (especially for data scraping and automation), with strong hands-on experience with Pandas, NumPy, and other data manipulation libraries. Experience with web scraping tools and techniques (e.g., BeautifulSoup, Scrapy, Selenium). Strong SQL skills and experience working with relational databases (e.g., PostgreSQL, MySQL) and data warehouses (e.g., Redshift, Snowflake, BigQuery). Familiarity with data visualization tools like Power BI, Tableau, or Looker. Knowledge of ETL tools and orchestration frameworks such as Apache Airflow, Luigi, or Prefect. Experience with version control systems like Git and collaborative platforms like Jira or Confluence. Strong understanding of data security, privacy, and governance best practices. Excellent problem-solving skills and attention to detail. Preferred Qualifications: Experience with cloud platforms such as AWS, GCP, or Azure. Familiarity with NoSQL databases like MongoDB, Cassandra, or Elasticsearch. Understanding of CI/CD pipelines and DevOps practices related to data engineering. Job Type: Full-Time (In-Office) Work Days: Monday to Saturday Job Types: Full-time, Permanent Pay: ₹700,000.00 - ₹1,500,000.00 per year Schedule: Day shift Work Location: In person
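As a concrete illustration of the scrape-clean-store loop described above, here is a hedged sketch using the stack the posting names (requests, BeautifulSoup, pandas). The URL and CSS selectors are placeholders, not a real site.

```python
# Hypothetical scrape -> clean -> store pipeline. The target URL and the
# .product-card/.name/.price selectors are invented for illustration.
import requests
import pandas as pd
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/products", timeout=30)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

rows = [
    {"name": card.select_one(".name").get_text(strip=True),
     "price": card.select_one(".price").get_text(strip=True)}
    for card in soup.select(".product-card")
]

df = pd.DataFrame(rows)
# cleaning/normalization step: strip currency symbols and thousands
# separators, then coerce the price column to a numeric type
df["price"] = pd.to_numeric(
    df["price"].str.lstrip("₹$").str.replace(",", ""), errors="coerce"
)
df.to_csv("products_clean.csv", index=False)
```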

Posted 4 days ago

Apply

3.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site


About The Role We are seeking a highly skilled Data Engineer with deep expertise in PySpark and the Cloudera Data Platform (CDP) to join our data engineering team. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines that ensure high data quality and availability across the organization. This role requires a strong background in big data ecosystems, cloud-native tools, and advanced data processing techniques. The ideal candidate has hands-on experience with data ingestion, transformation, and optimization on the Cloudera Data Platform, along with a proven track record of implementing data engineering best practices. You will work closely with other data engineers to build solutions that drive impactful business insights. Responsibilities Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy. Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP. Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements. Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes. Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline. Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem. Education and Experience Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform. Technical Skills PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques. Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase. Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala). Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools. Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks. Scripting and Automation: Strong scripting skills in Linux.
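For context, below is a minimal PySpark sketch of the ingest-transform-load pattern this posting describes on a Hive-backed platform such as CDP. The input path and table names are hypothetical.

```python
# Sketch: read a landed CSV drop, apply de-duplication and a basic quality
# gate, and publish the result as a Hive table. All paths/names are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("orders_etl")
         .enableHiveSupport()   # lets Spark read/write Hive tables
         .getOrCreate())

raw = spark.read.option("header", True).csv("/data/landing/orders/")

clean = (raw
         .dropDuplicates(["order_id"])                        # de-duplicate on the key
         .withColumn("amount", F.col("amount").cast("double"))
         .filter(F.col("amount").isNotNull()))                # basic quality gate

clean.write.mode("overwrite").saveAsTable("analytics.orders")  # Hive table on CDP
```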

Posted 4 days ago

Apply

3.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site


Role: Data Engineer Key Skills: PySpark, Cloudera Data Platform, Big Data - Hadoop, Hive, Kafka Responsibilities Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy. Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP. Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements. Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes. Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline. Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem. Technical Skills 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques. Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase. Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala). Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools. Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks. Scripting and Automation: Strong scripting skills in Linux.

Posted 4 days ago

Apply

8.0 - 12.0 years

0 Lacs

Noida

On-site

Calling all innovators – find your future at Fiserv. We're Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv. Job Title Specialist, Data Architecture What does a successful Lead, Data Conversions do? A Conversion Lead is responsible for the timely and accurate conversion of new and existing bank/client data to Fiserv systems, from both internal and external sources. This role provides data analysis for client projects and accommodates other ad hoc data updates to meet client requests. As part of the overall Service Delivery organization, a Conversion Lead plays a critical role in mapping in data to support project initiatives for new and existing banks/clients. Leads provide a specialized service to the Project Manager teams—developing custom reporting, providing technical assistance, and ensuring project timelines are met. Working with financial services data means a high priority on accuracy and adherence to procedures and guidelines. What you will do The person stepping in as backup will need to review the specification history, then review and understand the code developed to resolve the issue or change; the same applies on the switch back to the original developer. Today, the associate handling the project logs back in to support the effort and address the issue or change. What you will need to have Bachelor's degree in programming or related field Working Hours (IST): 12:00 p.m. – 09:00 p.m. (IST) Monday through Friday Highest attention to detail and accuracy Team player with ability to work independently Ability to manage and prioritize work queue across multiple workstreams Strong communication skills and ability to provide technical information to non-technical colleagues What would be great to have Experience with Data Modelling, Informatica, Power BI, MS Visual Basic, Microsoft Access and Microsoft Excel required. Experience with Card Management systems; debit card processing is a plus Understanding of applications and related database features that can be leveraged to improve performance Experience creating testing artifacts (test cases, test plans) and knowledge of various testing types.
8–12 years' experience and strong knowledge of MS SQL/PSQL, MS SSIS and data warehousing concepts Should have strong database fundamentals and expert knowledge in writing SQL commands, queries and stored procedures Experience in performance tuning of complex SQL queries. Strong communication skills and ability to provide technical information to non-technical colleagues. Ability to mentor junior team members Ability to manage and prioritize work queue across multiple workstreams. Team player with ability to work independently. Experience in the full software development life cycle using agile methodologies. Should have a good understanding of Agile methodologies and be able to handle agile ceremonies. Efficient in reviewing, analyzing, coding, testing, and debugging of application programs. Should be able to work under pressure while resolving critical issues in the Prod environment. Good communication skills and experience in working with clients. Good understanding of the Banking domain. Minimum 8 years' relevant experience in data processing (ETL) conversions or the financial services industry Thank you for considering employment with Fiserv. Please: Apply using your legal name Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable). Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law. Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions. Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.

Posted 4 days ago

Apply

0 years

7 - 9 Lacs

Noida

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Responsibilities: Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources. Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics. Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements. Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness. Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance. Requirement Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database. Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices. Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL). Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus. Mandatory skill sets: Spark, PySpark, Azure Preferred skill sets: Spark, PySpark, Azure Years of experience required: 4 - 8 Education qualification: B.Tech / M.Tech / MBA / MCA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Microsoft Azure Optional Skills Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
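As an illustration of the Azure Databricks workflow these requirements describe, here is a hedged sketch of a Parquet-to-Delta hop on Azure Data Lake Storage. It assumes a Databricks notebook, where the `spark` session is pre-provisioned and the storage account is already authorized; the abfss paths are placeholders.

```python
# Sketch, for a Databricks notebook: read raw Parquet from ADLS, stamp an
# ingest date, and append to a partitioned Delta table. `spark` is the
# session Databricks provides; both paths below are invented.
from pyspark.sql import functions as F

src = "abfss://raw@examplelake.dfs.core.windows.net/sales/"
dst = "abfss://curated@examplelake.dfs.core.windows.net/sales_delta/"

df = (spark.read.parquet(src)
      .withColumn("ingest_date", F.current_date()))

(df.write
   .format("delta")              # Delta Lake, the default table format on Databricks
   .mode("append")
   .partitionBy("ingest_date")   # partitioning keeps incremental reads cheap
   .save(dst))
```

Partitioning by load date is one common choice here; a production pipeline would also add schema enforcement and a MERGE step for late-arriving updates.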

Posted 4 days ago

Apply

0 years

0 Lacs

Noida

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Manager Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PwC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Responsibilities: Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources. Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics. Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements. Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness. Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance. Requirement Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database. Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices. Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL). Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus. Mandatory skill sets: Spark, PySpark, Azure Preferred skill sets: Spark, PySpark, Azure Years of experience required: 8 - 12 Education qualification: B.Tech / M.Tech / MBA / MCA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering, Master of Business Administration Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Data Science Optional Skills Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date

Posted 4 days ago

Apply

0 years

3 - 7 Lacs

Noida

On-site

Req ID: 328481 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Business Intelligence Senior Analyst to join our team in Noida, Uttar Pradesh (IN-UP), India (IN). Work as an ETL developer using Oracle SQL, PL/SQL, Informatica, Linux scripting and Tidal: ETL code development, unit testing, source code control, technical specification writing and production implementation. Develop and deliver ETL applications using Informatica, Teradata, Oracle, Tidal and Linux. Interact with BAs and other teams for requirement clarification, query resolution, testing and sign-offs Developing software that conforms to a design model within its constraints Preparing documentation for design, source code, unit test plans. Ability to work as part of a global development team. Should have good knowledge of the healthcare domain and Data Warehousing concepts About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Posted 4 days ago

Apply

7.0 years

0 Lacs

Uttar Pradesh

On-site

About the Job: We are seeking a skilled Report Writer/SQL Developer to join our Analytics team. This role is pivotal in transforming complex data into clear, actionable reports and dashboards that support business decision-making. You will work closely with product, engineering, and business stakeholders to develop scalable and accurate reporting solutions. What will you do? Design, develop, and optimize complex SQL queries, views, and stored procedures for reporting and analytics. Build and maintain robust reports and dashboards using tools such as Dundas BI, Power BI, Tableau, or Cognos. Translate business requirements into efficient and scalable reporting solutions. Develop and maintain data models (star/snowflake schema) to support reporting needs. Ensure high data quality and consistency across environments through validation and quality checks. Support ad hoc data requests and report development for stakeholders. Document report specifications, data sources, logic, and workflows. Participate in data governance and contribute to standardization and best practices. Collaborate in Agile development environments with cross-functional teams. What we are looking for: Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field. 7+ years of experience as a Report Writer, BI Developer, or SQL Developer. Advanced proficiency in SQL (MySQL, PostgreSQL, or similar RDBMS). Experience developing and maintaining reports using BI tools such as Dundas BI, Power BI, Tableau, or Cognos. Strong knowledge of data modeling techniques and relational database design. Familiarity with ETL processes, data warehousing concepts, and performance tuning. Exposure to cloud platforms (Azure, AWS) is a plus. Experience working in Agile/Scrum environments. Strong analytical and problem-solving skills. Excellent communication skills and ability to work in a team environment.
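To ground the star-schema reporting work described above, here is a small self-contained sketch of the kind of fact-to-dimension rollup query a report writer produces. sqlite3 keeps it runnable, and the schema and figures are invented.

```python
# Sketch: a fact table joined to two dimensions and rolled up by month and
# category, the typical shape of a dashboard query over a star schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, month TEXT);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (date_key INTEGER, product_key INTEGER, revenue REAL);
    INSERT INTO dim_date    VALUES (1, '2025-01'), (2, '2025-02');
    INSERT INTO dim_product VALUES (10, 'Hardware'), (20, 'Software');
    INSERT INTO fact_sales  VALUES (1, 10, 100.0), (1, 20, 250.0), (2, 10, 80.0);
""")

report = conn.execute("""
    SELECT d.month, p.category, SUM(f.revenue) AS revenue
    FROM fact_sales f
    JOIN dim_date    d ON d.date_key    = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.month, p.category
    ORDER BY d.month, revenue DESC
""").fetchall()
print(report)   # [('2025-01', 'Software', 250.0), ('2025-01', 'Hardware', 100.0), ...]
```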

Posted 4 days ago

Apply

2.0 years

0 Lacs

Noida

On-site

About the Role: We are seeking talented and detail-oriented Data Engineers with expertise in Informatica MDM to join our fast-growing data engineering team. Depending on your experience, you'll join as a Software Engineer or Senior Software Engineer, contributing to the design, development, and maintenance of enterprise data management solutions that support our business objectives. As a key player, you will be responsible for building reliable data pipelines, working with master data management, and ensuring data quality, governance, and integration across systems. Responsibilities: Design, develop, and implement data pipelines using ETL tools like Informatica PowerCenter, IICS, etc., and MDM solutions using Informatica MDM. Develop and maintain batch and real-time data integration workflows. Collaborate with data architects, business analysts, and stakeholders to understand data requirements. Perform data profiling, data quality assessments, and master data matching/merging. Implement governance, stewardship, and metadata management practices. Optimize the performance of Informatica MDM Hub, IDD, and associated components. Write complex SQL queries and stored procedures as needed. Senior Software Engineer – Additional Responsibilities: Lead design discussions and code reviews; mentor junior engineers. Architect scalable data integration solutions using Informatica and complementary tools. Drive adoption of best practices in data modeling, governance, and engineering. Work closely with cross-functional teams to shape the data strategy. Required Qualifications: Software Engineer: Bachelor's degree in Computer Science, Information Systems, or related field. 2–4 years of experience with Informatica MDM (Customer 360, Business Entity Services, Match/Merge rules). Strong SQL and data modeling skills. Familiarity with ETL concepts, REST APIs, and data integration tools. Understanding of data governance and quality frameworks. Senior Software Engineer: Bachelor's or Master's in Computer Science, Data Engineering, or related field. 4+ years of experience in Informatica MDM, with at least 2 years in a lead role. Proven track record of designing scalable MDM solutions in large-scale environments. Strong leadership, communication, and stakeholder management skills. Hands-on experience with data lakes, cloud platforms (AWS, Azure, or GCP), and big data tools is a plus. Preferred Skills (Nice to Have): Experience with other Informatica products (IDQ, PowerCenter). Exposure to cloud MDM platforms or cloud data integration tools. Agile/Scrum development experience. Knowledge of industry-standard data security and compliance practices. Job Type: Full-time Pay: ₹219,797.43 - ₹1,253,040.32 per year Benefits: Health insurance Schedule: Day shift Ability to commute/relocate: Noida, Uttar Pradesh: Reliably commute or planning to relocate before starting work (Preferred) Application Question(s): Are you willing to start immediately (Preferred)? Experience: Data warehouse: 3 years (Required) Informatica MDM: 3 years (Required) Work Location: In person

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

Vadodara, Gujarat, India

On-site


This job requires individuals to be focused, structured and independent. You will work with internal stakeholders globally and with colleagues from other departments, hence a proactive approach and excellent communication skills are required. You must be patient and thorough in your work. Responsibilities / Tasks Design, develop, and maintain interactive dashboards and reports using Power BI. Use DAX (Data Analysis Expressions) and Power Query for data modeling and transformation. Develop and maintain Power Apps applications to automate workflows and enhance business processes. Utilize Power Automate to streamline data integration and process automation. Hands-on experience with database systems like MySQL or SQL Server. Write optimized SQL queries for data extraction, transformation and reporting, and manage data from relational databases. Create Python-based applications/scripts to enhance data workflows. Design, build and launch reliable data pipelines to move data to the Data Lake or Warehouse, to enable effective reporting and visualization. Knowledge of cloud platforms (for example Azure or similar). Collaborate with business stakeholders to gather requirements and translate them into technical solutions. Identify business challenges and propose data-driven strategies for process optimization and efficiency. Communicate findings, insights, and recommendations to stakeholders in a clear and effective manner. Work closely with IT and business teams to improve data governance and reporting standards. Handle confidential information responsibly. Strong problem-solving and analytical skills. Experience in Agile/Scrum environments. Your Profile / Qualifications Bachelor's degree in Data Science, Statistics, Computer Science, Mathematics, Economics, or a related field, with a minimum of 4-8 years of experience. Proven working experience as a Data Analyst or Business Data Analyst. Strong analytical and mathematical skills to help collect, measure, organize and analyze data. Profound knowledge of data modeling and how to extract, transform and load data (ETL). Proficiency in Power BI (DAX, Power Query, and data visualization techniques). Hands-on experience with Power Apps (canvas and model-driven apps). Strong command of SQL databases (writing queries, stored procedures, indexing). Proficiency in Python and ability to work with Pandas, NumPy, and data visualization tools if required. Understanding of data modeling, ETL processes, and relational database structures. Strong attention to detail and ability to work with large datasets. Works individually and in teams with optimism and a solution-oriented, Agile mindset. Committed to a course of action to achieve goals and deliverables according to the tasks in the pipeline. Cross-culture intelligence: this position requires working effectively with multiple cultures around the world. Fluent in English (verbal and written) Did we spark your interest? Then please click apply above to access our guided application process.
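As a small illustration of the Python side of this profile, here is a pandas extract-transform sketch producing a tidy monthly table a Power BI report could consume. The file and column names are invented for the example.

```python
# Sketch: aggregate raw order rows into a month-by-region summary that a
# BI tool can load directly. orders.csv and its columns are hypothetical.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

monthly = (orders
           .assign(month=orders["order_date"].dt.to_period("M").astype(str))
           .groupby(["month", "region"], as_index=False)
           .agg(revenue=("amount", "sum"),
                orders=("order_id", "count")))

# write a tidy table Power BI (or any BI tool) can consume directly
monthly.to_csv("monthly_revenue.csv", index=False)
```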

Posted 4 days ago

Apply

4.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Greetings from Rentokil PCI! We are pleased to announce a walk-in interview for the role of Business Intelligence Analyst - Goregaon at Rentokil PCI, a leading organization committed to delivering excellence. This is a great opportunity to join a dynamic team and grow your career in a fast-paced, technology-driven environment.

Walk-in Interview Details
Position: Business Intelligence Analyst
Experience Required: 2 to 4 years of proven experience in a Business Intelligence Analyst role.
Date: Monday, 16 June 2025
Time: 11:00 AM to 2:00 PM
Venue: Rentokil PCI Pest Control Pvt. Ltd., 3rd Floor, Narayani, Ambabai Temple Compound, Aarey Rd, near Bank of Maharashtra, Goregaon West, Mumbai, Maharashtra 400062

Important Information
Candidates with strong English communication skills will be preferred, especially those currently based on the Western line of Mumbai.
A minimum of 2 and a maximum of 4 years of experience as a Business Intelligence Analyst is required.
We are looking for immediate joiners or those with a short notice period.
Please carry your updated resume and attend the interview in formal attire.

About Rentokil PCI
Rentokil PCI is the leading pest control service provider in India. A Rentokil Initial brand, Rentokil PCI was formed in 2017 through a joint venture (JV) between Pest Control India, the number one pest control company in India, and Rentokil, the world's leading pest control brand. Rentokil PCI aims to set new standards for customer service, with operations across 300 locations in India. For more details: https://www.rentokil-pestcontrolindia.com

About The Role
The Business Intelligence Analyst is responsible for working within the BI team to deliver reporting and dashboard solutions that meet the needs of the organisation. The developer must work well in a team setting and have excellent organisational, prioritisation, communication, and time management skills. The successful candidate will demonstrate accountability, flexibility, and adaptability to handle multiple and changing priorities, and be able to successfully collaborate with development teams, technology groups, consultants, and key stakeholders. The person will report to the Manager - Application Support. The incumbent will have to work as part of a multi-functional team, which involves collaboration with the internal team and external stakeholders.
Job Responsibilities
Develop and manage BI solutions
Analyse business processes and requirements
Create and maintain documentation, including requirements, design, and user manuals
Conduct unit testing and troubleshooting
Develop and execute database queries and conduct analyses
Identify development needs in order to improve and streamline operations
Identify opportunities to improve processes and strategies with technology solutions

Key Result Areas
Ensure quality and accuracy of data assets and analytic deliverables
Troubleshoot business intelligence modelling issues and develop solutions within the timelines
Query resolution
Enhance application knowledge to implement new solutions
On-time deployment of different projects as per the business requirements
On-time creation and analysis of visualisations and reports

Competencies (Skills Essential To The Role)
Strong analytical skills
Ability to solve practical problems and deal with a variety of concrete variables in situations where only limited standardisation exists
Ability to think logically and troubleshoot issues
Excellent interpersonal (verbal and written) communication skills are required to support working in project environments that include internal, external, and customer teams

Why Join Rentokil PCI?
Rentokil PCI is a recognized leader in the pest control and hygiene industry, committed to delivering excellence and ensuring customer satisfaction. By joining our team, you will have the opportunity to advance your career in a dynamic, fast-paced environment, with continuous learning and development at the forefront of our culture. If you meet the requirements and are interested, we would be delighted to meet you at the walk-in interview.
Contact Person: Hitesha Patel
Contact Number: 8828018709
Email ID: hiteshav.patel@rentokil-pci.com
We look forward to seeing you there!
✔ Review our website for a better understanding: https://www.rentokil-pestcontrolindia.com
👥 Spread the word! If you know someone suitable for this role, feel free to tag them or share this post.
👉 Join us and be a part of a team that's committed to delivering service excellence and building healthier environments.

Requirements
Educational Qualification / Other Requirements:
Graduate degree in Computer Science or Information Technology
2 to 4 years of experience working on a BI platform (Data Studio, Qlik), any cloud platform, and large queries
Strong SQL development skills, with in-depth knowledge of complex SQL queries and a good understanding of QlikSense
Good working knowledge of SSIS, SSRS, and SSAS, and of proper workflow design and exception management
Experience in data warehouse, ETL, cube, and report design and development

Role Type / Key Working Relationships
Individual contributor
Internal team
External stakeholders

Benefits
What can you expect from RPCI?
Our values lie at the core of our mission and vision. We believe that it's our people who make our company what it is. We believe in:
Safety
Integrity
Innovation
Learning & Development
Open & Transparent
Performance Orientation

Posted 4 days ago

Apply

0 years

0 Lacs

Noida

On-site

Posted On: 12 Jun 2025
Location: Noida, UP, India
Company: Iris Software

Why Join Us? Are you inspired to grow your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It's happening right here at Iris Software.

About Iris Software
At Iris Software, our vision is to be our client's most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential. With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.

Working at Iris
Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about "Being Your Best" – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We're a place where everyone can discover and be their best version.

Job Description
Experience in data testing using SQL
Knowledge of Python is a plus
Domain expertise in Risk and Finance IT
Availability to provide overlap with EST hours (until noon EST)

Mandatory Competencies
ETL - Tester
QA Manual - Manual Testing
Database - SQL
Python - Python
Beh - Communication and collaboration

Perks and Benefits for Irisians
At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.

Posted 4 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Description
The ideal candidate is an experienced analyst who has demonstrated proficiency in analytics-driven business solutions. The person would be a data resource for the team and would work to generate actionable intelligence and insights through rigorous data analysis and structured reporting, ensuring the team's efforts are focused in the appropriate areas. The person would deep-dive and bring out pointers that help drive continuous improvement in processes from the Loss Prevention standpoint, thereby helping to reduce losses across the Amazon network. They are comfortable analyzing data from multiple sources to create strategic recommendations in a thoughtful, concise manner and obtaining organizational buy-in at senior levels. They are well-organized, can manage multiple analyses/projects simultaneously, and are intellectually curious. Successful candidates will be expected to demonstrate our leadership principles: bias for action, deep-dive, ownership, and customer obsession.

Key Job Responsibilities
Converting data into digestible business intelligence and actionable information
Writing high-quality SQL code to retrieve and analyze data
Working with large data sets, automating data extraction, and building monitoring/reporting dashboards
Interacting with internal stakeholders to deep-dive outlier events (see the sketch below)
Analyzing and solving business problems with a focus on understanding root causes and driving forward-looking opportunities
Communicating complex analysis and insights to our stakeholders and business leaders, both verbally and in writing
Enabling effective decision-making by retrieving and aggregating data from multiple sources and compiling it into a digestible and actionable format

Basic Qualifications
1+ years of tax, finance, or related analytical field experience
2+ years of experience writing complex Excel VBA macros
Bachelor's degree or equivalent
Experience defining requirements and using data and metrics to draw business insights
Experience with SQL or ETL

Preferred Qualifications
Experience working with Tableau
Experience using very large datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ATSPL - Karnataka
Job ID: A3007302
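The "deep-dive outlier events" responsibility above usually starts with a comparison query against a network-wide baseline. A minimal sketch, assuming a hypothetical shipments table and an illustrative 1.5x-of-average threshold:

```python
import sqlite3

# Sketch of an outlier deep-dive query. Table, data, and the 1.5x
# threshold are hypothetical, for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE shipments (site TEXT, units_lost INTEGER);
    INSERT INTO shipments VALUES
        ('BLR1', 2), ('BLR1', 3), ('HYD1', 4), ('HYD1', 40);
""")

# Flag sites whose average loss exceeds 1.5x the network average.
outliers = conn.execute("""
    SELECT site, AVG(units_lost) AS site_avg
    FROM shipments
    GROUP BY site
    HAVING AVG(units_lost) > 1.5 * (SELECT AVG(units_lost) FROM shipments)
""").fetchall()
print(outliers)  # [('HYD1', 22.0)]
```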

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Position Overview
We are seeking a highly motivated and experienced ETL Tester to join our team and play a crucial role in ensuring the quality and reliability of our data platform and pipelines. You will be responsible for leading the development and implementation of an automated testing framework specifically designed for validating Extract, Transform, Load (ETL) processes.

Responsibilities
Design and execute test cases for ETL processes and data integration
Validate the accuracy and completeness of data being loaded into the target systems
Develop SQL to validate the data, such as checking for duplicates, null values, and truncated values, and ensuring correct data aggregations (see the sketch below)
Run jobs using IBM DataStage for the ETL process
Execute test cases through Zephyr
Validate data in the target database according to mapping and business rules
Identify, isolate, and report defects and issues in the ETL processes
Develop and maintain test data for ETL testing
Run GitHub commands for automation
Validate data in OBIEE dashboards and reports against the database
Identify data quality issues in source data
Collaborate with the development team to resolve defects and improve the ETL processes
Participate in the design and implementation of automated testing tools and frameworks
Actively participate in status reporting and Agile meetings
Document test results and communicate with stakeholders on the status of ETL testing
Track and report defects in applications like JIRA and Zephyr

Qualifications
Required Skills:
Good understanding of the healthcare domain
Good knowledge of SDLC and STLC, with specific expertise in software testing
Strong understanding of ETL processes and data warehousing
Experience working with large data sets and writing complex SQL queries
Testing experience in database systems such as Oracle and SQL Server
Experience with Toad for Oracle and DB2 applications
Experience in running, monitoring, and debugging ETL jobs
Experience with test case design and execution
Understanding of data models, data mapping documents, ETL design, and ETL coding
Experience within an Agile development environment (sprint planning, demos, retrospectives, and other sprint ceremonies)
Understanding of BI concepts - OLAP vs OLTP
Experience in OBIEE reporting
Broad knowledge of automated testing and modelling tools
Knowledge of build automation and deployment tools such as Jenkins
Flexible with timings
Excellent analytical skills and innovative problem-solving ability
Good communication skills and a very good team player

Required Experience & Education
Bachelor's degree or higher in Computer Science or a related field, with a minimum of 3 years of relevant experience
3–5 years of experience in software QA and ETL testing
ETL testers typically work closely with ETL developers, business analysts, and data engineers to understand the ETL requirements, design specifications, and data mappings. They use a combination of manual testing techniques and automated testing tools to perform ETL testing effectively. Strong SQL skills, data analysis abilities, and a good understanding of data integration concepts are essential for an ETL tester to be successful in this role.

About Evernorth Health Services
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
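The duplicate/null/truncation checks listed above typically reduce to a handful of SQL probes against the target table. A minimal runnable sketch (SQLite via Python; the member table and the expected column width are hypothetical):

```python
import sqlite3

# Minimal sketch of target-side ETL test checks: duplicates, nulls,
# and truncated values. Table, data, and widths are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE member (member_id TEXT, plan_code TEXT);
    INSERT INTO member VALUES ('M001', 'GOLD'), ('M001', 'GOLD'), ('M002', NULL);
""")

checks = {
    "duplicate_ids": "SELECT member_id FROM member GROUP BY member_id HAVING COUNT(*) > 1",
    "null_plan_codes": "SELECT COUNT(*) FROM member WHERE plan_code IS NULL",
    "truncated_codes": "SELECT COUNT(*) FROM member WHERE LENGTH(plan_code) < 4",
}
for name, sql in checks.items():
    print(name, conn.execute(sql).fetchall())
```

Each failed check then becomes a defect logged in JIRA/Zephyr with the offending keys attached.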

Posted 4 days ago

Apply

15.0 years

0 Lacs

Indore

On-site

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy, ensuring that the data architecture aligns with business objectives and supports analytical needs.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the design and implementation of data architecture to support data initiatives.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform (see the PySpark sketch below).
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and processes.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Indore office.
- 15 years of full-time education is required.
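For context on the Databricks skill above, a typical pipeline step looks like the following PySpark sketch; the input path, column names, and output location are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal sketch of a Databricks-style ETL step in PySpark.
# Paths and columns are hypothetical placeholders.
spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

orders = spark.read.option("header", True).csv("/tmp/raw/orders.csv")

daily = (
    orders.withColumn("amount", F.col("amount").cast("double"))
          .filter(F.col("amount").isNotNull())       # basic data-quality gate
          .groupBy("order_date")
          .agg(F.sum("amount").alias("daily_revenue"))
)
daily.write.mode("overwrite").parquet("/tmp/curated/daily_revenue")
```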

Posted 4 days ago

Apply

3.0 years

0 Lacs

Jaipur

On-site

Data Engineer
Role Category: Programming & Design
Job Location: Jaipur, Rajasthan (on-site)
Experience Required: 3–6 Years

About the Role
We are looking for a highly skilled and motivated Data Engineer to join our team. You will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure that support analytics, machine learning, and business intelligence initiatives across the company.

Key Responsibilities
Design, develop, and maintain robust ETL/ELT pipelines to ingest and process data from multiple sources.
Build and maintain scalable and reliable data warehouses, data lakes, and data marts.
Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver solutions.
Ensure data quality, integrity, and security across all data systems.
Optimize data pipeline performance and troubleshoot issues in a timely manner.
Implement data governance and best practices in data management.
Automate data validation, monitoring, and reporting processes.

Required Skills and Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
Proven experience (X+ years) as a Data Engineer or in a similar role.
Strong programming skills in Python, Java, or Scala.
Proficiency with SQL and working knowledge of relational databases (e.g., PostgreSQL, MySQL).
Hands-on experience with big data technologies (e.g., Spark, Hadoop).
Familiarity with cloud platforms such as AWS, GCP, or Azure (e.g., S3, Redshift, BigQuery, Data Factory).
Experience with orchestration tools like Airflow or Prefect (see the sketch below).
Knowledge of data modeling, warehousing, and architecture design principles.
Strong problem-solving skills and attention to detail.
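Since the posting lists Airflow among the orchestration tools, here is a minimal DAG sketch (assuming Airflow 2.4+ for the `schedule` argument); the two task callables are hypothetical stand-ins for real extract/load logic.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical stand-ins for real extract/load steps.
def extract(): ...
def load(): ...

# Minimal sketch of a daily ELT pipeline as an Airflow DAG.
with DAG(
    dag_id="daily_orders_elt",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # load runs only after extract succeeds
```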

Posted 4 days ago

Apply

5.0 years

0 Lacs

Andhra Pradesh

Remote

Primary Skills
Data Engineer

JD
Responsible for leading the team while also being willing to get hands-on by writing code and contributing to any part of the development lifecycle. AWS, Redshift, EMR, cloud ETL tools, S3.

We are looking for a Senior Consultant with at least 5 years of experience to join our team. The ideal candidate should have strong leadership skills and be able to lead a team effectively. At the same time, the candidate should also be willing to get their hands dirty by writing code and contributing to the development lifecycle. The primary skills required for this role include expertise in AWS Redshift and AWS native services, as well as experience with EMR, cloud ETL tools, and S3 (a typical load pattern is sketched below). The candidate should have a strong understanding of these tools and be able to utilize them effectively to meet project goals. As a Senior Consultant, the candidate will be responsible for providing guidance and support to the team, as well as ensuring the successful completion of projects. This is a hybrid work mode position, requiring the candidate to work both remotely and in the office as needed.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a global team of 27,000 people who care about your growth — one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
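The S3-to-Redshift pattern named above commonly comes down to a COPY statement issued from Python. A minimal sketch; the cluster endpoint, credentials, bucket, table, and IAM role are all hypothetical placeholders.

```python
import psycopg2

# Minimal sketch of loading S3 data into Redshift with COPY.
# Every identifier below is a hypothetical placeholder.
COPY_SQL = """
    COPY analytics.orders
    FROM 's3://example-bucket/orders/2025-06-12/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="etl_user", password="...",
)
with conn, conn.cursor() as cur:   # commits on success, rolls back on error
    cur.execute(COPY_SQL)
```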

Posted 4 days ago

Apply

5.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Location: Hyderabad, India Employment Type: Full-Time; Salaried Compensation: Base Salary, Bonus, Benefits Job Description About Us: At Innovapptive, we are bringing the industrial front-line worker, back-office and assets together. Our platform is the only patented and “Code-Free” connected worker platform for SAP and IBM Maximo and is disrupting and digitizing archaic, tedious, & labor intensive paper-based processes for maintenance, operations, & supply chain. The industrial front-line workers are empowered with a suite of highly reconfigurable mobile apps, while the back-office has real-time visibility into the front-line workforce with better planning, scheduling, adoption monitoring and actionable insights. Some of the world’s largest brands such as Newmont Mining, Dominion Nuclear, Hess, Shell, UNICEF, ConocoPhillips, Reckitt Benckiser are digitally transforming their back-office and front-line industrial worker experiences. We are saving companies millions of dollars by improving their asset uptime, productivity, safety, and talent challenges, while delivering jobs better, faster, cheaper and safer. We are backed by Tiger Global Management, a Global Marquee Fund with over $30 Billion of Assets Under Management (AUM). Tiger Global Management has a reputation of investing and building some of the world's "Unicorn" brands such as Spotify, Netflix, Facebook, LinkedIn, Amazon, Peloton, Harry's, Ola, Flipkart, Freshworks and many more! Recently we have announced our Series B funding led by Vista Equity Partners, a leading global technology investor, with participation from Tiger Global Management, our existing Series A investor. Vista invests in mission-critical software businesses that have a clear purpose and a demonstrated track record of success, such as Innovapptive. Our mission is made possible by Innovapptive’s most important asset: our people. We come together through collaboration and ambition in a team-driven culture. Through the success of our product, we have seen monumental growth in our workforce, and we constantly look for exceptional talent to join us. At Innovapptive, you are challenged with dynamic tasks that drive your professional development and career growth. Join us on our journey to deliver an innovative connected worker experience and to empower 350 Million Industrial Front-Line Workers around the world with the ability to truly harness the power of connected worker experience by improving the working life of a front-line worker and the back-office employee. The Role We are seeking an exceptional Technical Consultant to join our team and play a pivotal role in collaborating with our Customers, Product Managers and professional services teams. As an expert you will work with customers during the different phases to define the requirements, principles and models that guide technology services’ decisions in alignment with customer strategic IT and enterprise goals. You will work closely with cross-functional teams, Customer Solution Architects to design and implement best-in-class Industrial SaaS solutions tailored to this critical domain to advocate the value identified in integration models. You will serve as the trusted advisor for our customers, guiding them through the successful integration, deployment, and architectural alignment of Innovapptive's Connected Worker SaaS Platform with their existing enterprise systems. You will provide advisory integration, support scalable deployments on public/Private cloud platforms and lead end-to-end solutioning for ETL. 
Your role combines architecture, hands-on implementation, and cross-functional collaboration. If this opportunity excites you, we encourage you to apply even if you do not meet all of the qualifications.

How You Will Make an Impact:
Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

Cloud Architecture & Solution Advisory
Engage with customers (architects/SMEs) to understand their cloud infrastructure and recommend scalable, secure deployment architectures for Innovapptive's Connected Worker SaaS solution (AWS required; Azure/GCP a plus)
Guide customers on best practices for SaaS adoption, API consumption, middleware strategy, and cloud security configuration
Define architecture standards, frameworks, and guidelines based on our product tech stack and architecture
Participate directly in the governance process for reviewing specific solutions to ensure they properly leverage published frameworks and standards

Integration Solutioning
Analyse existing ERP/CRM systems (SAP PM/MM, Maximo, Salesforce, Oracle, etc.) and design integration solutions using middleware (CPI, BTP, MuleSoft, PI/PO, Dell Boomi, etc.)
Lead integration planning and mapping sessions with client architects and SMEs to identify business-critical data flows and transformation logic

ETL & Data Flow Enablement
Architect and document ETL pipelines and data transformation rules for clean and consistent integration between Innovapptive and customer systems
Ensure reliable and efficient data ingestion using secure APIs, OData, REST, SOAP, or event-based mechanisms

Security, IAM & Compliance
Implement security best practices for data exchange, including OAuth 2.0, OpenID, TLS/SSL, encryption (AES, RSA), and secure token handling (see the sketch after this posting)
Support identity federation using SSO, MFA, and enterprise IdPs (Azure AD, Okta, etc.)
Ensure adherence to GDPR, CCPA, and other data privacy regulations
Understand security monitoring tools and techniques to detect and respond to security incidents
Knowledge of logging and auditing best practices to track system activity and identify anomalies

Deployment & Customer Support
Support deployment of Innovapptive solutions in customer environments, ensuring performance, stability, and maintainability
Guide client teams through testing, go-live planning, and post-deployment support cycles

Collaboration & Enablement
Act as a liaison between customer technical teams, internal product engineering, and delivery teams
Provide enablement and onboarding support to client IT teams and partners
Document integration and deployment architecture and conduct knowledge transfer sessions

Personality Traits
Strong Logical Reasoning: Ability to analyze complex problems, break them down into smaller components, and identify root causes
Analytical Thinking: Skill in gathering and interpreting data to draw informed conclusions and make data-driven decisions
Problem-Solving: A proactive approach to identifying and resolving issues, developing innovative solutions, and implementing effective strategies
Critical Thinking: The ability to question assumptions, evaluate evidence, and consider multiple perspectives to arrive at sound judgments
Requirements Gathering: Work closely with customers and internal stakeholders to gather and prioritize maintenance and reliability requirements, translating them into actionable product features
Stay Current: Stay up to date with the latest technological trends, emerging technologies, and competitive offerings to ensure our solution remains at the forefront of the field
Rapid Iteration and Execution: Champion a culture of speed and agility, driving rapid product iteration and execution. Set high bars for quality, efficiency, and speed-to-market. Break down complex problems into actionable steps and relentlessly prioritize delivering results quickly
Leadership Mindset: Lead and mentor the integration and security teams daily and through complex, multi-phased delivery projects, and provide hands-on delivery guidance
Customer-Centric Mindset: Deeply understand our target customers, their pain points, and needs. Conduct user research, customer interviews, and usability studies to gather insights and validate product decisions. Advocate for the customer throughout the product life cycle and be their voice in the organization
Cross-Functional Collaboration: Collaborate closely with engineering, design, marketing, professional services, and sales teams to align on product design and development. Foster a culture of collaboration, transparency, and cross-functional excellence. Work closely with engineering to deliver high-quality products on time and within budget
Data-Driven Decision Making: Utilize data analytics and metrics to make informed decisions. Monitor key product metrics, conduct A/B testing, and perform user behavior analysis to gain insights. Leverage data to iterate on features, optimize user experiences, and drive product success
Startup Mindset: Thrive in a dynamic, fast-paced startup environment. Embrace ambiguity and take ownership of challenges. Display entrepreneurial spirit, innovative thinking, and a willingness to take calculated risks. Be adaptable, resilient, and results-oriented

What You Bring to the Team:
Bachelor's degree in Computer Science, Information Technology, or a related field
Deep knowledge of microservices architecture and API strategy development
5–8 years in enterprise SaaS application integration and cloud architecture
Experience in deploying and integrating SaaS platforms with ERP/CRM systems (e.g., SAP PM/MM, Maximo, Oracle, SFDC)
Proficient in REST/OData/SOAP web services, middleware (CPI/BTP/ESB/MuleSoft), and scripting tools
Hands-on experience with public cloud platforms (AWS required; Azure or GCP a plus)
Solid grasp of networking/security protocols (HTTPS, IPsec, TLS), IAM (SSO, MFA), and API management
Familiarity with integration/security tools like Postman, Swagger, Wireshark, Fiddler, etc.
Familiarity with data encryption techniques (AES, RSA) to protect sensitive data during transmission and storage
Knowledge of data privacy regulations (GDPR, CCPA) and data protection best practices
Proficiency in securing API endpoints with measures like rate limiting, input validation, and output encoding
Experience with SAP as a backend, the SMP SDK, and SAP HANA would be an additional advantage

Preferred:
TOGAF certification or an equivalent enterprise architecture background
Hands-on experience with mobile-first field operations solutions
Prior exposure to Connected Worker technologies or frontline digitization
Experience working in a fast-paced startup or SaaS environment

Soft Skills:
Strong problem-solving and analytical thinking in technical and business contexts
Excellent communication and client-facing skills
Passion for innovation, customer success, and digital transformation
Ability to travel as needed to work closely with clients and internal teams

Innovapptive does not accept and will not review unsolicited resumes from search firms.

Innovapptive is an equal opportunity employer and is committed to a diverse and inclusive workplace. Qualified applicants will receive consideration for employment without regard to race, color, religion or creed, alienage or citizenship status, political affiliation, marital or partnership status, age, national origin, ancestry, physical or mental disability, medical condition, veteran status, gender, gender identity, pregnancy, childbirth (or related medical conditions), sex, sexual orientation, sexual and other reproductive health decisions, genetic disorder, genetic predisposition, carrier status, military status, familial status, or domestic violence victim status and any other basis protected under federal, state, or local laws.
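Of the security practices listed above, the OAuth 2.0 client-credentials flow is the most common building block for system-to-system API integration. A minimal sketch with hypothetical endpoints and credentials (not Innovapptive's actual API):

```python
import requests

# Minimal sketch of the OAuth 2.0 client-credentials flow.
# Token URL, client id/secret, and API endpoint are hypothetical.
token = requests.post(
    "https://idp.example.com/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "my-integration",
        "client_secret": "...",  # placeholder; store in a secrets manager
    },
    timeout=30,
).json()["access_token"]

resp = requests.get(
    "https://api.example.com/v1/workorders",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```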

Posted 4 days ago

Apply

Exploring ETL Jobs in India

The ETL (Extract, Transform, Load) job market in India is thriving with numerous opportunities for job seekers. ETL professionals play a crucial role in managing and analyzing data effectively for organizations across various industries. If you are considering a career in ETL, this article will provide you with valuable insights into the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving tech industries and often have a high demand for ETL professionals.

Average Salary Range

The average salary range for ETL professionals in India varies based on experience levels. Entry-level positions typically start at around ₹3-5 lakhs per annum, while experienced professionals can earn upwards of ₹10-15 lakhs per annum.

Career Path

In the ETL field, a typical career path may include roles such as:

  • Junior ETL Developer
  • ETL Developer
  • Senior ETL Developer
  • ETL Tech Lead
  • ETL Architect

As you gain experience and expertise, you can progress to higher-level roles within the ETL domain.

Related Skills

Alongside ETL, professionals in this field are often expected to have skills in:

  • SQL
  • Data Warehousing
  • Data Modeling
  • ETL Tools (e.g., Informatica, Talend)
  • Database Management Systems (e.g., Oracle, SQL Server)

Having a strong foundation in these related skills can enhance your capabilities as an ETL professional.

Interview Questions

Here are 25 interview questions that you may encounter in ETL job interviews:

  • What is ETL and why is it important? (basic)
  • Explain the difference between ETL and ELT processes. (medium)
  • How do you handle incremental loads in ETL processes? (medium; see the worked sketch after this list)
  • What is a surrogate key in the context of ETL? (basic)
  • Can you explain the concept of data profiling in ETL? (medium)
  • How do you handle data quality issues in ETL processes? (medium)
  • What are some common ETL tools you have worked with? (basic)
  • Explain the difference between a full load and an incremental load. (basic)
  • How do you optimize ETL processes for performance? (medium)
  • Can you describe a challenging ETL project you worked on and how you overcame obstacles? (advanced)
  • What is the significance of data cleansing in ETL? (basic)
  • How do you ensure data security and compliance in ETL processes? (medium)
  • Have you worked with real-time data integration in ETL? If so, how did you approach it? (advanced)
  • What are the key components of an ETL architecture? (basic)
  • How do you handle data transformation requirements in ETL processes? (medium)
  • What are some best practices for ETL development? (medium)
  • Can you explain the concept of change data capture in ETL? (medium)
  • How do you troubleshoot ETL job failures? (medium)
  • What role does metadata play in ETL processes? (basic)
  • How do you handle complex transformations in ETL processes? (medium)
  • What is the importance of data lineage in ETL? (basic)
  • Have you worked with parallel processing in ETL? If so, explain your experience. (advanced)
  • How do you ensure data consistency across different ETL jobs? (medium)
  • Can you explain the concept of slowly changing dimensions in ETL? (medium)
  • How do you document ETL processes for knowledge sharing and future reference? (basic)
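
A worked sketch for the incremental-load question flagged in the list above: load only rows newer than a stored high-water mark, then advance the mark. The tables and values are hypothetical, but the pattern transfers to any ETL tool.

```python
import sqlite3

# Watermark-based incremental load: copy only rows changed since the
# last run. All tables and dates here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_orders (id INTEGER, updated_at TEXT);
    CREATE TABLE tgt_orders (id INTEGER, updated_at TEXT);
    CREATE TABLE etl_watermark (table_name TEXT, last_loaded TEXT);
    INSERT INTO etl_watermark VALUES ('orders', '2025-06-10');
    INSERT INTO src_orders VALUES (1, '2025-06-09'), (2, '2025-06-11');
""")

wm = conn.execute(
    "SELECT last_loaded FROM etl_watermark WHERE table_name = 'orders'"
).fetchone()[0]

# Only row 2 is newer than the watermark, so only it is loaded.
conn.execute(
    "INSERT INTO tgt_orders SELECT id, updated_at FROM src_orders WHERE updated_at > ?",
    (wm,),
)
conn.execute(
    "UPDATE etl_watermark SET last_loaded = "
    "(SELECT MAX(updated_at) FROM tgt_orders) WHERE table_name = 'orders'"
)
print(conn.execute("SELECT * FROM tgt_orders").fetchall())  # [(2, '2025-06-11')]
```

A full load would instead truncate the target and reload everything; the trade-off is simplicity versus run time and load on the source system.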

Closing Remarks

As you explore ETL jobs in India, remember to showcase your skills and expertise confidently during interviews. With the right preparation and a solid understanding of ETL concepts, you can embark on a rewarding career in this dynamic field. Good luck with your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
