
5858 Data Warehousing Jobs - Page 4

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

As an Oracle Reporting & Analytics (FDI) Lead, you will play a crucial role in our team by applying your expertise in Oracle ERP Cloud and EBS analytics reporting. The role requires a deep understanding of ERP data structures across modules such as PPM, Finance, Procurement, HCM, and Sales. You will collaborate with business stakeholders, IT peers, consultants, and developers to ensure the successful delivery of Oracle Fusion Applications solutions that align with business requirements, and your guidance will be pivotal in the technical design and development of Oracle Fusion reports that achieve optimal performance and scalability.

Your daily tasks will involve gathering business requirements, designing specifications, developing code, conducting unit, integration, and acceptance testing, and providing implementation support. You will also construct data models to support analytical reporting needs; create reports, dashboards, and application setups based on client requirements for managing data models and data pipelines; build custom data pipelines; integrate with other warehouse applications; optimize dashboards; and assist with FDI provisioning and configuration.

To excel in this role, you should hold a Bachelor's degree in computer science or a related field and have over 12 years of experience with data analytics and reporting tools, specializing in Oracle FDI, Oracle Analytics Cloud (OAC), or OBIEE. Hands-on experience implementing FDI (FAW) and BI analytics using Oracle Analytics Cloud (OAC) and Autonomous Data Warehouse (ADW), along with at least one implementation of Fusion Data Intelligence, is highly advantageous. The ability to develop custom data pipelines, integrate with warehouse applications, and work on custom data warehouse projects using ETL tools such as ODI/ODI Marketplace on OCI infrastructure will be crucial. A good functional understanding of Oracle ERP modules such as Finance, PPM, Sales, and HCM, and familiarity with the Fusion data model, data augmentation, semantic model extensions, and FDI roles are essential. Proficiency in SQL, analytical custom reports/dashboards, and debugging is expected. As a committed team player, you will also lead and mentor the team to achieve organizational goals.

Preferred skills include experience with OBIEE, ERP Cloud BIP, BICC, OCI DI, and OTBI reporting, as well as knowledge of Fusion PPM, CX, and HCM core modules, GL Balance Cubes, and FRS/Smart View.

Posted 3 days ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Supply Chain Planning Data Scientist role at HP involves collecting, cleaning, preprocessing, and analyzing large datasets to derive meaningful insights and actionable recommendations. You will be responsible for creating clear and effective visual representations of data using charts, graphs, and dashboards to communicate findings to technical and non-technical audiences, empowering teams across the organization to work more efficiently and effectively.

As a member of a team of data science engineers, you will be involved in the investigation, design, development, execution, and implementation of data science projects to generate new insights, products, technologies, and intellectual property. You will create plans, data collection and analysis procedures, and data insight visualizations for assigned projects, collaborating with internal and external partners to perform experiments and validations in accordance with the overall plan.

In this role, you will identify and drive scalable solutions for building and automating reports, data pipelines, and dashboards to monitor and report on operational performance metrics. Collaborating with cross-functional teams to understand business requirements and develop data-driven solutions is a key aspect of this role. Additionally, you will provide guidance and mentoring to less-experienced staff members, solve difficult and complex problems with a fresh perspective, lead moderate- to high-complexity projects, deliver professionally written reports, and support the realization of operational and strategic plans.

The ideal candidate should have a four-year or graduate degree in Mathematics, Statistics, Economics, Computer Science, or a related discipline, or commensurate work experience or demonstrated competence. Typically, candidates should have 4-7 years of work experience, preferably in data analytics, database management, statistical analysis, or a related field, or an advanced degree with 3-5 years of work experience. Certification in a programming language (SQL, Python, or similar) is preferred. Proficiency in Agile Methodology, Business Intelligence, Computer Science, Dashboards, Data Analysis, Data Management, Data Modeling, Data Quality, Data Science, Data Visualization, Data Warehousing, Extract Transform Load (ETL), Machine Learning, Power BI, Python, R, SAS, SQL, Statistics, and Tableau is desired. Skills in Effective Communication, Results Orientation, Learning Agility, Digital Fluency, and Customer Centricity are also valued.

This role at HP impacts multiple teams and may act as a team or project leader, providing direction to team activities and facilitating information validation and team decision-making processes. The complexity of the position involves responding to moderately complex issues within established guidelines.

HP is a technology company that operates in more than 170 countries worldwide, committed to creating technology that makes life better for everyone, everywhere. The company values diversity, equity, and inclusion, creating a culture where everyone is respected, can be themselves, and can contribute to something bigger than themselves. HP celebrates the notion that individuals can belong and bring their authentic selves to work, fostering innovation and growth. Join HP in reimagining and reinventing what's possible in your career and the world around you. Embrace tough challenges, disrupt the status quo, and create what's next alongside a team of talented individuals dedicated to making a meaningful difference. Thrive at HP and be a part of shaping a better future for all.
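To make the day-to-day of this role concrete, here is a minimal sketch of the collect-clean-summarize workflow it describes, using pandas; the file path, column names, and the 7-day SLA target are illustrative assumptions, not taken from the posting.

```python
import pandas as pd

# Illustrative file and column names -- placeholders, not from the posting.
orders = pd.read_csv("supply_chain_orders.csv", parse_dates=["order_date"])

# Basic cleaning: drop exact duplicates and rows missing key fields.
orders = orders.drop_duplicates()
orders = orders.dropna(subset=["order_id", "region", "lead_time_days"])

# Derive an insight: average lead time per region, flagged against a target.
summary = (
    orders.groupby("region", as_index=False)["lead_time_days"]
    .mean()
    .rename(columns={"lead_time_days": "avg_lead_time_days"})
)
summary["meets_target"] = summary["avg_lead_time_days"] <= 7  # assumed 7-day SLA
print(summary.sort_values("avg_lead_time_days"))
```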

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

Genpact is a global professional services and solutions firm committed to delivering outcomes that shape the future. With a workforce of over 125,000 people across more than 30 countries, we are fueled by our innate curiosity, entrepreneurial agility, and dedication to creating lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, drives us to serve and transform leading enterprises, including the Fortune Global 500, using our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently looking for candidates to fill the role of Assistant Vice President, Data & AI Solutioning. We are searching for highly skilled and dynamic solution leaders to join our Data & AI team. As a vital member of our professional services team, you will be instrumental in shaping and delivering innovative solutions that address our clients' business requirements. The ideal candidate has a profound understanding of data analytics and AI technologies, substantial pre-sales experience, and a proven track record of driving successful client engagements.

Responsibilities:
- Collaborate with sales teams to understand client requirements and devise customized solutions aligned with their strategic goals.
- Lead pre-sales activities, including solution presentations, product demonstrations, and technical workshops, to showcase the value proposition of our offerings.
- Translate business requirements into technical specifications and design scalable, high-performance data and analytics solutions.
- Work closely with internal teams, such as delivery, engineering, and product management, to ensure a smooth transition from sales to implementation.
- Provide technical expertise and thought leadership to bolster sales efforts and drive revenue growth.
- Keep abreast of industry trends and emerging technologies in data and analytics, and incorporate them into solution designs to maintain a competitive edge.
- Act as a trusted advisor to clients, offering strategic guidance and recommendations to optimize their data and analytics initiatives.

Minimum Qualifications/Skills:
- Bachelor's or master's degree in computer science, engineering, information systems, or a related field, with relevant industry experience.
- Big 4 or global SI experience and a background in working with cross-functional teams.
- Relevant experience in a pre-sales or solutions architect role, with a focus on data and analytics.
- Extensive knowledge of data warehousing, data lakes, business intelligence, and analytics platforms (e.g., Databricks, Snowflake, AWS, GCP, and Azure, and visualization tools such as Tableau and Power BI).
- Strong understanding of cloud computing concepts and experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Demonstrated ability to lead and influence cross-functional teams, including sales, delivery, and engineering.
- Excellent communication and presentation skills, with the capacity to explain complex technical concepts to non-technical stakeholders.

Preferred Qualifications/Skills:
- Demonstrated ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively.
- Relevant certifications (e.g., AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect).

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of financial and non-financial services across the globe. This is a senior, hands-on technical delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 6-8 years of experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three pillars our team supports (Financial Crime, Financial Risk, and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skill sets.

In this role, you will:
- Ingest and provision raw datasets, enriched tables, and curated, re-usable data assets to enable a variety of use cases.
- Drive improvements in the reliability and frequency of data ingestion, including increasing real-time coverage.
- Support and enhance data ingestion infrastructure and pipelines.
- Design and implement data pipelines that collect data from disparate sources across the enterprise and from external sources and deliver it to our data platform.
- Build Extract, Transform, and Load (ETL) workflows, using both advanced data manipulation tools and programmatic manipulation, ensuring data is available at each stage of the data flow, in the form needed, for each system, service, and customer along that flow.
- Identify and onboard data sources using existing schemas and, where required, conduct exploratory data analysis to investigate and provide solutions.
- Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities.

Core/must-have skills:
- 3-8 years of expertise in designing and implementing data warehouses and data lakes using the Oracle tech stack (ETL: ODI, SSIS; DB: PL/SQL; and AWS Redshift).
- At least 4 years of experience managing data extraction, transformation, and loading from various sources using Oracle Data Integrator, with exposure to other tools such as SSIS.
- At least 4 years of experience in database design and dimensional modeling using Oracle PL/SQL and Microsoft SQL Server.
- Experience developing ETL processes: control tables, error logging, auditing, data quality, etc., with reusability and parameterized workflow design (a minimal sketch of this pattern follows this listing).
- Advanced working SQL knowledge and experience with relational and NoSQL databases, with working familiarity across a variety of databases (Oracle, SQL Server, Neo4j).
- Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems.
- Expertise in data modeling and database design, with performance-tuning skills.
- Experience with OLAP and OLTP databases and data structuring/modeling, with an understanding of key data points.
- Experience building and optimizing data pipelines on Azure Databricks, AWS Glue, or Oracle Cloud.
- Create and support ETL pipelines and table schemas to accommodate new and existing data sources for the lakehouse.
- Experience with data visualization (Power BI/Tableau) and SSRS.

Good to have:
- Experience in Financial Crime, Financial Risk, and Compliance technology transformation domains.
- Certification in any cloud tech stack, preferably Microsoft Azure.
- In-depth knowledge and hands-on experience with data engineering, data warehousing, and Delta Lake on-prem (Oracle RDBMS, Microsoft SQL Server) and in the cloud (Azure, AWS, or Oracle Cloud).
- Ability to script (Bash, Azure CLI), code (Python, C#), and query (SQL, PL/SQL, T-SQL), coupled with software version control (e.g., GitHub) and CI/CD systems.
- Design and development of systems for maintaining the Azure/AWS lakehouse, ETL processes, business intelligence, and data ingestion pipelines for AI/ML use cases.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and to build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers to the complex issues facing our world today.
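As a concrete illustration of the ETL control-table and error-logging pattern named above, here is a minimal sketch in Python using the standard-library sqlite3 module; the table layout and job metadata are illustrative assumptions, not part of the posting.

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("etl_control.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS etl_control (
        run_id INTEGER PRIMARY KEY AUTOINCREMENT,
        job_name TEXT NOT NULL,
        started_at TEXT NOT NULL,
        finished_at TEXT,
        status TEXT,          -- RUNNING / SUCCESS / FAILED
        rows_loaded INTEGER,
        error_message TEXT
    )
""")

def run_job(job_name, load_fn):
    """Wrap a load step with control-table bookkeeping and error logging."""
    cur = conn.execute(
        "INSERT INTO etl_control (job_name, started_at, status) VALUES (?, ?, 'RUNNING')",
        (job_name, datetime.now(timezone.utc).isoformat()),
    )
    run_id = cur.lastrowid
    try:
        rows = load_fn()  # parameterized load step supplied by the caller
        conn.execute(
            "UPDATE etl_control SET finished_at = ?, status = 'SUCCESS', rows_loaded = ? WHERE run_id = ?",
            (datetime.now(timezone.utc).isoformat(), rows, run_id),
        )
    except Exception as exc:  # audit the failure instead of losing it
        conn.execute(
            "UPDATE etl_control SET finished_at = ?, status = 'FAILED', error_message = ? WHERE run_id = ?",
            (datetime.now(timezone.utc).isoformat(), str(exc), run_id),
        )
    finally:
        conn.commit()

run_job("daily_orders_load", lambda: 1200)  # stub load returning a row count
```

The same pattern scales to ODI or SSIS: each run is registered before work starts, so failures leave an auditable row rather than a silent gap.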

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You are invited to join our team in Chennai as a Talend Developer on a contract basis for a duration of 3 months. Your primary responsibility will involve designing, developing, and implementing data integration solutions using Talend Data Integration tools. This position is tailored for individuals who excel in a dynamic, project-oriented setting and possess a solid foundation in ETL development.

Your key duties will include crafting and executing scalable ETL processes through Talend Open Studio/Enterprise. You will be tasked with merging data from various sources into target systems while ensuring data quality and coherence. Collaboration with data architects, analysts, and fellow developers will be essential to grasp data requirements and translate them into technical solutions. Moreover, optimizing and fine-tuning ETL jobs for enhanced performance and reliability will be part of your routine tasks. It will also be your responsibility to create and maintain technical documentation for ETL processes and data workflows, as well as troubleshoot and resolve ETL issues and production bugs.

An ideal candidate should possess a minimum of 3 years of hands-on experience with Talend Data Integration. Proficiency in ETL best practices, data modeling, and data warehousing concepts is expected. Additionally, a strong command of SQL and experience working with relational databases such as Oracle, MySQL, and PostgreSQL is essential. Knowledge of big data technologies like Hadoop, Spark, and Hive is advantageous, as is familiarity with cloud platforms like AWS, Azure, and GCP. Your problem-solving skills, ability to work independently, and excellent communication and teamwork abilities will be critical to your success in this role.

This is a contractual/temporary position that requires your presence at the office for the duration of the 3-month contract term.

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

Join Zendesk as a Data Engineering Manager and lead a team of data engineers who deliver meticulously curated data assets to fuel business insights. Collaborate with product managers, data scientists, and data analysts to drive successful implementation of data products. We are seeking a leader with advanced skills in data infrastructure, data warehousing, and data architecture, as well as a proven track record of scaling BI teams. Be a part of our mission to embrace data and analytics and create a meaningful impact within our organization.

You will foster the growth and development of a team of data engineers; design, build, and launch new data models and pipelines in production; and act as a player-coach to amplify the effects of your team's work. You will build connections with diverse teams to understand data requirements, and help develop and support your team in technical architecture, project management, and product knowledge. You will define processes for operational excellence in project management and system reliability, set direction for the team to anticipate strategic and scaling-related challenges, and foster a healthy and collaborative culture that embodies our values.

What You Bring to the Role:
- Bachelor's degree in Computer Science/Engineering or a related field.
- 7+ years of proven experience in data engineering and data warehousing.
- 3+ years as a manager of data engineering teams.
- Proficiency with SQL and a programming language (Python/Ruby).
- Experience with Snowflake, BigQuery, Airflow, and dbt (a minimal Airflow sketch follows this listing).
- Familiarity with BI tools (Looker, Tableau) is desirable.
- Proficiency in the modern data stack and architectural strategies.
- Excellent written and oral communication skills.
- Proven track record of coaching/mentoring individual contributors and fostering a culture that values diversity.
- Experience leading SDLC and Scrum/Agile delivery teams.
- Experience working with globally distributed teams preferred.

Tech Stack: SQL, Python/Ruby, Snowflake, BigQuery, Airflow, dbt

Please note that this position requires being physically located in and working from Pune, Maharashtra, India.

Zendesk software was built to bring calm to the chaotic world of customer service. We advocate for digital-first customer experiences and strive to create a fulfilling and inclusive workplace experience. Our hybrid working model allows for connection, collaboration, and learning in person at our offices globally, as well as remote work flexibility. Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans. If you require an accommodation to participate in the hiring process, please email peopleandplaces@zendesk.com with your specific request.
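For flavor, here is a minimal sketch of the kind of scheduled pipeline the stack above implies, using Airflow's Python API; the DAG id, task logic, and schedule are illustrative assumptions, not part of the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder extract step -- in practice this would pull from a source system.
    return [{"user_id": 1, "tickets": 4}]

def load(ti):
    rows = ti.xcom_pull(task_ids="extract")
    # Placeholder load step -- in practice this would write to Snowflake/BigQuery.
    print(f"loading {len(rows)} rows")

with DAG(
    dag_id="daily_curated_assets",  # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```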

Posted 3 days ago

Apply

9.0 - 14.0 years

0 - 0 Lacs

Bangalore, Bhagalpur, Chennai

Remote

Job brief

We are seeking an experienced Data Manager to lead the development and utilization of data systems. In this role, you will be responsible for identifying efficient methods to organize, store, and analyze data while maintaining strict security and confidentiality measures.

An exceptional Data Manager comprehends the intricacies of data management and possesses a deep understanding of databases and data analysis procedures. You should also possess strong technical acumen and exceptional troubleshooting abilities. Your primary objective will be to ensure the seamless and secure flow of information within and outside the organization, guaranteeing timely access and delivery of data. By implementing effective data management practices, you will contribute to the overall success of our organization. Join our team and be a key driver in optimizing our data systems, unlocking valuable insights, and supporting data-driven decision-making processes.

Responsibilities
- Create and enforce policies for effective data management
- Formulate techniques for quality data collection to ensure adequacy, accuracy, and legitimacy of data
- Devise and implement efficient and secure procedures for data handling and analysis, with attention to all technical aspects
- Establish rules and procedures for data sharing with upper management, external stakeholders, etc.
- Support others in the daily use of data systems and ensure adherence to legal and company standards
- Assist with reports and data extraction when needed
- Monitor and analyze information and data systems, and evaluate their performance to discover ways of enhancing them (new technologies, upgrades, etc.)
- Ensure digital databases and archives are protected from security breaches and data losses
- Troubleshoot data-related problems and authorize maintenance or modifications

Requirements and skills
- Proven experience as a Data Manager
- Excellent understanding of data administration and management functions (collection, analysis, distribution, etc.)
- Familiarity with modern database and information system technologies
- Proficient in MS Office (Excel, Access, Word, etc.)
- An analytical mindset with problem-solving skills
- Excellent communication and collaboration skills
- BSc/BA in computer science or a relevant field

Posted 3 days ago

Apply

3.0 - 6.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Why We Work at Dun & Bradstreet

Dun & Bradstreet unlocks the power of data through analytics, creating a better tomorrow. Each day, we are finding new ways to strengthen our award-winning culture and accelerate creativity, innovation, and growth. Our 6,000+ global team members are passionate about what we do. We are dedicated to helping clients turn uncertainty into confidence, risk into opportunity, and potential into prosperity. Bold and diverse thinkers are always welcome. Come join us! Learn more at dnb.com/careers.

This role is responsible for improving customer satisfaction and supporting revenue generation by analyzing and controlling the data used for products, scoring, and analytical models, and for leading the technical support given to trade partners, Data Operations Analysts, and trade departments globally in solving trade problems and trade-related issues. You will lead data ingestion projects to onboard new markets or move data sources from legacy platforms onto a modern cloud environment.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Monitor and troubleshoot data workflows to ensure data quality, availability, and performance.
- Automate manual processes and develop innovative data tools.
- Evaluate and implement recent technology solutions, with end-to-end process ownership.
- Communicate with stakeholders and conduct knowledge-exchange sessions with technical and non-technical audiences.
- Build new analytical processes, provide insight into data quality issues, and implement data quality improvement processes.
- Research and implement modern data mastering techniques to increase derived insight on disparate data sources.
- Support other data engineers with ETL process design, code reviews, and knowledge sharing.
- Develop and maintain data documentation, including data dictionaries, data flow diagrams, and data lineage.

Key Skills
- Minimum of 5 years of experience in data engineering or a related field.
- Bachelor's degree in computer science, information technology, or a related discipline.
- Strong proficiency in SQL and hands-on experience with at least one programming language such as PHP, Python, or Java.
- Ability to use network, application, and operating-system monitoring and troubleshooting.
- Ownership of existing applications for further development and improvement, working closely with related groups to ensure business continuity.
- A self-motivated learner with strong customer and quality focus.
- Logical, with very strong problem-solving skills.
- Strong understanding of data modeling, data warehousing, and database design.
- Experience with hosted environments: AWS, Azure, GCP, or other cloud service providers.
- Analytical skills, including the ability to analyze code bases to increase performance.
- Strong team player with excellent listening and communication skills; fluent English, written and verbal.
- Results-oriented and flexible, with an enthusiastic approach and the ability to respond quickly to customer demands and market conditions.

All Dun & Bradstreet job postings can be found at https://dnb.com/about-us/careers-and-people/joblistings.html and https://jobs.lever.co/dnb. Official communication from Dun & Bradstreet will come from an email address ending in @dnb.com.

Notice to Applicants: Please be advised that this job posting page is hosted and powered by Lever. Your use of this page is subject to Lever's Privacy Notice and Cookie Policy, which govern the processing of visitor data on this platform.
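One responsibility above, implementing data quality improvement processes, can be made concrete with a small sketch; the dataset, rules, and column names below are illustrative assumptions, not from the posting.

```python
import pandas as pd

# Illustrative dataset and rules -- placeholders, not from the posting.
trades = pd.DataFrame({
    "trade_id": [1, 2, 2, 4],
    "amount": [100.0, -5.0, 250.0, None],
    "country": ["IN", "US", "US", "DE"],
})

checks = {
    "no_duplicate_ids": trades["trade_id"].is_unique,
    "amounts_present": trades["amount"].notna().all(),
    "amounts_non_negative": (trades["amount"].dropna() >= 0).all(),
    "country_code_length": trades["country"].str.len().eq(2).all(),
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # In a real pipeline this would feed a data-quality dashboard or alert.
    print("data quality failures:", failed)
else:
    print("all checks passed")
```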

Posted 3 days ago

Apply

2.0 - 6.0 years

8 - 12 Lacs

Gurugram

Work from Office

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. Join Team Amex and let's lead the way together.

From building next-generation apps and microservices in Kotlin to using AI to help protect our franchise and customers from fraud, you could be doing entrepreneurial work that brings our iconic, global brand into the future. As a part of our tech team, we could work together to bring ground-breaking and diverse ideas to life that power our digital systems, services, products, and platforms. If you love to work with APIs, contribute to open source, or use the latest technologies, we'll support you with an open environment and learning culture.

Function Description:
American Express is looking for energetic, successful, and highly skilled engineers to help shape our technology and product roadmap. Our software engineers not only understand how technology works, but how that technology intersects with the people who count on it every day. Today, innovative ideas, insight, and new points of view are at the core of how we create a more powerful, personal, and fulfilling experience for our customers and colleagues, with batch and real-time analytical solutions using ground-breaking technologies to deliver innovative solutions across multiple business units. This engineering role is based in our Global Risk and Compliance Technology organization and will have a keen focus on platform modernization, bringing to life the latest technology stacks to support the ongoing needs of the business as well as compliance with global regulatory requirements.

Qualifications:
- Support the Compliance and Operations Risk data delivery team in India to lead and assist in the design and development of applications.
- Take responsibility for specific functional areas within the team; this involves project management and taking business specifications.
- Independently run projects and tasks delegated to you.

Technology Skills:
- Bachelor's degree in Engineering or Computer Science, or equivalent.
- 2 to 5 years of experience is required.
- GCP Professional Data Engineer certification.
- Expert in Google BigQuery for data warehousing needs (a minimal query sketch follows this listing).
- Experience with Big Data (Spark Core and Hive) preferred.
- Familiarity with GCP offerings; experience building data pipelines on GCP is a plus.
- Knowledge of Hadoop architecture (Hadoop, MapReduce, HBase); UNIX shell scripting experience is good to have.
- Creative, innovative problem solving.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
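Since the role centers on BigQuery for warehousing, here is a minimal sketch of running a query with the google-cloud-bigquery client; the project, dataset, and table names are illustrative assumptions, not from the posting.

```python
from google.cloud import bigquery

# Assumes application-default credentials are configured; names are placeholders.
client = bigquery.Client(project="my-risk-project")

query = """
    SELECT account_region, COUNT(*) AS alert_count
    FROM `my-risk-project.compliance.alerts`
    WHERE alert_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY account_region
    ORDER BY alert_count DESC
"""

# client.query() submits the job; result() blocks until rows are available.
for row in client.query(query).result():
    print(row.account_region, row.alert_count)
```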

Posted 3 days ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Join our digital revolution in NatWest Digital X

In everything we do, we work to one aim: to make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter.

Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India, and as such all normal working days must be carried out in India.

Job Description

Join us as a Data Engineer. We're looking for someone to build effortless, digital-first customer experiences to help simplify our organisation and keep our data safe and secure. Day-to-day, you'll develop innovative, data-driven solutions through data pipelines, modelling, and ETL design while aspiring to be commercially successful through insights. If you're ready for a new challenge and want to bring a competitive edge to your career profile by delivering streaming data ingestions, this could be the role for you. We're offering this role at associate vice president level.

What you'll do

Your daily responsibilities will include developing a comprehensive knowledge of our data structures and metrics, and advocating for change when needed for product development. You'll also provide transformation solutions and carry out complex data extractions. We'll expect you to develop a clear understanding of data platform cost levels to build cost-effective and strategic solutions. You'll also source new data by using the most appropriate tooling before integrating it into the overall solution to deliver it to our customers.

You'll also be responsible for:
- Driving customer value by understanding complex business problems and requirements to correctly apply the most appropriate and reusable tools to build data solutions
- Participating in the data engineering community to deliver opportunities to support our strategic direction
- Carrying out complex data engineering tasks to build a scalable data architecture, and transforming data to make it usable for analysts and data scientists
- Building advanced automation of data engineering pipelines through the removal of manual stages
- Leading on the planning and design of complex products and providing guidance to colleagues and the wider team when required

The skills you'll need

To be successful in this role, you'll have an understanding of data usage and dependencies with wider teams and the end customer. You'll also have experience of extracting value and features from large-scale data. We'll expect you to have experience of ETL technical design; data quality testing, cleansing, and monitoring; data sourcing, exploration, and analysis; and data warehousing and data modelling.

You'll also need:
- Experience of using programming languages, alongside knowledge of data and software engineering fundamentals
- Good knowledge of modern code development practices
- Great communication skills, with the ability to proactively engage with a range of stakeholders

Posted 3 days ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Chennai

Work from Office

Join us as a Data Engineer. We're looking for someone to build effortless, digital-first customer experiences to help simplify our organisation and keep our data safe and secure. Day-to-day, you'll develop innovative, data-driven solutions through data pipelines, modelling, and ETL design while aspiring to be commercially successful through insights. If you're ready for a new challenge and want to bring a competitive edge to your career profile by delivering streaming data ingestions, this could be the role for you. We're offering this role at associate vice president level.

What you'll do

Your daily responsibilities will include developing a comprehensive knowledge of our data structures and metrics, and advocating for change when needed for product development. You'll also provide transformation solutions and carry out complex data extractions. We'll expect you to develop a clear understanding of data platform cost levels to build cost-effective and strategic solutions. You'll also source new data by using the most appropriate tooling before integrating it into the overall solution to deliver it to our customers.

You'll also be responsible for:
- Driving customer value by understanding complex business problems and requirements to correctly apply the most appropriate and reusable tools to build data solutions
- Participating in the data engineering community to deliver opportunities to support our strategic direction
- Carrying out complex data engineering tasks to build a scalable data architecture, and transforming data to make it usable for analysts and data scientists
- Building advanced automation of data engineering pipelines through the removal of manual stages
- Leading on the planning and design of complex products and providing guidance to colleagues and the wider team when required

The skills you'll need

To be successful in this role, you'll have an understanding of data usage and dependencies with wider teams and the end customer. You'll also have experience of extracting value and features from large-scale data. We'll expect you to have experience of ETL technical design; data quality testing, cleansing, and monitoring; data sourcing, exploration, and analysis; and data warehousing and data modelling.

You'll also need:
- Experience of using a programming language such as Python for developing custom operators and sensors in Airflow, improving workflow capabilities and reliability (a minimal sketch follows this listing)
- Good knowledge of Kafka and Kinesis for effective real-time data processing, and of Scala and Spark to enhance data processing efficiency and scalability
- Great communication skills, with the ability to proactively engage with a range of stakeholders
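As a hedged illustration of the custom-sensor point above, here is a minimal Airflow sensor that waits for a marker file; the file path, poke interval, and timeout are illustrative assumptions, not from the posting.

```python
import os

from airflow.sensors.base import BaseSensorOperator


class FileMarkerSensor(BaseSensorOperator):
    """Succeeds once a marker file appears on disk (illustrative example)."""

    def __init__(self, filepath: str, **kwargs):
        super().__init__(**kwargs)
        self.filepath = filepath

    def poke(self, context) -> bool:
        # Airflow calls poke() repeatedly until it returns True or times out.
        self.log.info("checking for %s", self.filepath)
        return os.path.exists(self.filepath)


# Usage inside a DAG (placeholder path):
# wait_for_feed = FileMarkerSensor(
#     task_id="wait_for_feed",
#     filepath="/data/incoming/feed_ready.marker",
#     poke_interval=60,
#     timeout=60 * 60,
# )
```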

Posted 3 days ago

Apply

8.0 - 13.0 years

13 - 17 Lacs

Noida, Pune, Bengaluru

Work from Office

Position Summary

We are looking for a highly skilled and experienced Data Engineering Manager to lead our data engineering team. The ideal candidate will possess a strong technical background, strong project management abilities, and excellent client handling/stakeholder management skills. This role requires a strategic thinker who can drive the design, development, and implementation of data solutions that meet our clients' needs while ensuring the highest standards of quality and efficiency.

Job Responsibilities
- Technology leadership: lead and guide the team, independently or with little support, to design, implement, and deliver complex cloud-based data engineering / data warehousing project assignments.
- Solution architecture and review: expertise in conceptualizing solution architecture and low-level design across a range of data engineering (Matillion, Informatica, Talend, Python, dbt, Airflow, Apache Spark, Databricks, Redshift) and cloud hosting (AWS, Azure) technologies.
- Manage projects in a fast-paced agile ecosystem and ensure quality deliverables within stringent timelines.
- Own risk management, maintaining risk documentation and mitigation plans.
- Drive continuous improvement in a Lean/Agile environment, implementing DevOps delivery approaches encompassing CI/CD, build automation, and deployments.
- Communication and logical thinking: demonstrate strong analytical skills, employing a systematic and logical approach to data analysis, problem-solving, and situational assessment; effectively present and defend team viewpoints while securing buy-in from both technical and client stakeholders.
- Client relationship: manage client relationships and client expectations independently, deliver results back to the client independently, and communicate clearly.

Education
BE/B.Tech or Master of Computer Application

Work Experience
- 8+ years of working experience and expertise in at least two ETL tools among Matillion, dbt, PySpark, Informatica, and Talend.
- Expertise and working experience in at least two databases among Databricks, Redshift, Snowflake, SQL Server, and Oracle.
- Strong data warehousing, data integration, and data modeling fundamentals, such as star schema, snowflake schema, dimension tables, and fact tables (a small worked example follows this listing).
- Strong experience with SQL building blocks, creating complex SQL queries and procedures.
- Experience in AWS or Azure cloud and its service offerings.
- Aware of techniques such as data modelling, performance tuning, and regression testing.
- Willingness to learn and take ownership of tasks.
- Excellent written/verbal communication and problem-solving skills.
- Understanding of and working experience with pharma commercial data sets like IQVIA, Veeva, Symphony, Liquid Hub, Cegedim, etc. would be an advantage.
- Hands-on in Scrum methodology (sprint planning, execution, and retrospection).

Behavioural Competencies: Teamwork & Leadership, Motivation to Learn and Grow, Ownership, Cultural Fit, Talent Management

Technical Competencies: Problem Solving, Life Science Knowledge, Communication, Designing Technical Architecture, Agile, PySpark, AWS Data Pipeline, Data Modelling, Matillion, Databricks

Location: Noida, Bengaluru, Pune, Hyderabad, India
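To ground the star-schema fundamentals mentioned above, here is a small sketch in Python: a fact table joined to two dimension tables using the standard-library sqlite3 module. The schema and data are invented purely for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension tables describe the 'who/what/where'; the fact table holds measures.
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, product_name TEXT);
    CREATE TABLE dim_region  (region_id  INTEGER PRIMARY KEY, region_name  TEXT);
    CREATE TABLE fact_sales (
        product_id INTEGER REFERENCES dim_product(product_id),
        region_id  INTEGER REFERENCES dim_region(region_id),
        units_sold INTEGER
    );
    INSERT INTO dim_product VALUES (1, 'DrugA'), (2, 'DrugB');
    INSERT INTO dim_region  VALUES (10, 'North'), (20, 'South');
    INSERT INTO fact_sales  VALUES (1, 10, 120), (1, 20, 80), (2, 10, 45);
""")

# A typical star-schema query: join the fact to its dimensions, then aggregate.
rows = conn.execute("""
    SELECT p.product_name, r.region_name, SUM(f.units_sold) AS total_units
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_region  r ON r.region_id  = f.region_id
    GROUP BY p.product_name, r.region_name
""").fetchall()
print(rows)
```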

Posted 3 days ago

Apply

7.0 - 12.0 years

12 - 17 Lacs

Noida, Bengaluru

Work from Office

Position Summary

This role acts as a subject matter expert in pharma commercial datasets, including US data sets and various business reporting metrics and KPIs: an expert in pharma sales commercial data who can guide and lead the team supporting pharma clients, primarily on data management and reporting projects.

Job Responsibilities
- Extensive experience working on pharma sales commercial datasets (customer, sales, claims, digital engagements, CRM interactions, etc.), or in Medical Affairs, Managed Markets, or Patient Support Services.
- Fair understanding of functional design architecture preparation, along with the logical/functional data model of a solution.
- Good client-facing experience covering business requirement discussions and business use case assessments, and conducting requirement workshops and interviews.
- Strong experience in SQL to analyse data sets and perform data profiling/assessment.
- Able to create business requirements; define scope, objectives, and functional specifications; develop business processes and recommendations related to a proposed solution; and obtain sign-off from customers.
- Facilitate the UAT execution phase and work with the project manager to obtain user acceptance test sign-off.
- Able to convert business requirements into functional design for the development team.
- Represent the practice as a functional SME for data warehouse, reporting, and master data management projects across pharma sales commercial data sets.
- Support Axtria's project and R&D teams with functional/domain knowledge.
- Able to create case studies and success stories of projects with a functional/domain view.
- Assist in whitepapers and points of view, and develop go-to-market strategy for sales commercial domain offerings.
- Able to coach team members in pharma sales commercial domain business and functional use cases.

Education
BE/B.Tech in IT or Computer Science, or Master of Computer Application

Work Experience
- Minimum 7 years of pharma industry domain experience performing a business analyst role in medium to large data warehousing and business intelligence projects.
- Minimum 5 years of experience working in the pharma domain, with experience in pharma data sets and client-facing roles.

Behavioural Competencies: Teamwork & Leadership, Motivation to Learn and Grow, Ownership, Cultural Fit, Talent Management, Client Management

Axtria RIGHT Values: the leader lives and breathes Axtria's RIGHT values, responding with a sense of urgency, demonstrating integrity, taking initiative, and being humble and collaborative, with responsibility and ownership.

Technical Competencies: Problem Solving, Life Science Knowledge, Communication, Capability Building / Thought Leadership, Business Consulting, Business Acumen, SQL, Subject Matter Expertise

Posted 3 days ago

Apply

0.0 - 2.0 years

0 Lacs

Bengaluru

Work from Office

Job Title: Associate Data Engineer (Internship Program to Full-time Employee)

Job Description

For more than 80 years, Kaplan has been a trailblazer in education and professional advancement. We are a global company at the intersection of education and technology, focused on collaboration, innovation, and creativity to deliver a best-in-class educational experience and make Kaplan a great place to work. Our offices in India opened in Bengaluru in 2018. Since then, our team has fueled growth and innovation across the organization, impacting students worldwide. We are eager to grow and expand with skilled professionals like you who use their talent to build solutions, enable effective learning, and improve students' lives. The future of education is here, and we are eager to work alongside those who want to make a positive impact and inspire change in the world around them.

The Associate Data Engineer at Kaplan North America (KNA) within the Analytics division will work with world-class psychometricians, data scientists, and business analysts to forever change the face of education. This role is a hands-on technical expert who will help implement an enterprise data warehouse powered by AWS RA3 as a key feature of our lakehouse architecture. The perfect candidate possesses strong technical knowledge in data engineering, data observability, infrastructure automation, DataOps methodology, systems architecture, and development. You should be expert at designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data warehouse and into end-user-facing applications. You should be able to work with business customers in a fast-paced environment, understanding the business requirements and implementing data and reporting solutions. Above all, you should be passionate about working with big data: someone who loves to bring datasets together to answer business questions and drive change.

Responsibilities
- Design, implement, and deploy data solutions, solving difficult problems and generating positive feedback.
- Build different types of data warehousing layers based on specific use cases.
- Lead the design, implementation, and successful delivery of large-scale, critical, or difficult data solutions involving a significant amount of work.
- Build scalable data infrastructure and understand distributed systems concepts from a data storage and compute perspective.
- Utilize expertise in SQL and a strong understanding of ETL and data modeling.
- Ensure the accuracy and availability of data to customers, and understand how technical decisions can impact their business's analytics and reporting.
- Be proficient in at least one scripting/programming language to handle large-volume data processing.

A 30-day notice period is preferred.

Requirements:
- In-depth knowledge of the AWS stack (RA3, Redshift, Lambda, Glue, SNS); a minimal sketch of querying Redshift from Python follows this listing.
- Experience in data modeling, ETL development, and data warehousing.
- Effective troubleshooting and problem-solving skills.
- Strong customer focus, ownership, urgency, and drive.
- Excellent verbal and written communication skills and the ability to work well in a team.

Preferred Qualifications: Proficiency with Airflow, Tableau, and SSRS.

Location: Bangalore, KA, India
Employee Type: Employee
Job Functional Area: Systems Administration/Engineering
Business Unit: 00091 Kaplan Higher ED

At Kaplan, we recognize the importance of attracting and retaining top talent to drive our success in a competitive market. Our salary structure and compensation philosophy reflect the value we place on the experience, education, and skills that our employees bring to the organization, taking into consideration labor market trends and total rewards. All positions with Kaplan are paid at least $15 per hour or $31,200 per year for full-time positions. Additionally, certain positions are bonus- or commission-eligible. And we have a comprehensive benefits package; learn more about our benefits here.

Diversity & Inclusion Statement: Kaplan is committed to cultivating an inclusive workplace that values diversity, promotes equity, and integrates inclusivity into all aspects of our operations. We are an equal opportunity employer, and all qualified applicants will receive consideration for employment regardless of age, race, creed, color, national origin, ancestry, marital status, sexual orientation, gender identity or expression, disability, veteran status, nationality, or sex. We believe that diversity strengthens our organization, fuels innovation, and improves our ability to serve our students, customers, and communities. Learn more about our culture here.

Kaplan considers qualified applicants for employment even if applicants have an arrest or conviction in their background check records. Kaplan complies with related background check regulations, including but not limited to the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. There are various positions where certain convictions may disqualify applicants, such as those requiring interaction with minors, financial records, or other sensitive and/or confidential information. Kaplan is a drug-free workplace and complies with applicable laws.
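As a hedged illustration of the Redshift piece of that AWS stack, here is a minimal sketch using boto3's Redshift Data API; the cluster, database, user, and SQL are illustrative assumptions, not from the posting.

```python
import time

import boto3

# Names below are placeholders, not from the posting.
client = boto3.client("redshift-data", region_name="us-east-1")

resp = client.execute_statement(
    ClusterIdentifier="analytics-ra3-cluster",
    Database="warehouse",
    DbUser="etl_user",
    Sql="SELECT course_id, COUNT(*) FROM enrollments GROUP BY course_id LIMIT 10",
)

# The Data API is asynchronous: poll until the statement finishes.
while True:
    desc = client.describe_statement(Id=resp["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if desc["Status"] == "FINISHED":
    result = client.get_statement_result(Id=resp["Id"])
    for record in result["Records"]:
        print(record)
```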

Posted 3 days ago

Apply

2.0 - 5.0 years

6 - 10 Lacs

Kolkata

Work from Office

Join our Team

About this opportunity:
Are you a talented IT professional passionate about data and eager to drive the future of analytics? Ericsson is looking for an IT Data Engineer to design, build, test, and maintain our data and analytics solutions. As part of this role, you will work on various technologies and platforms to ensure data is readily available and accessible to all relevant departments. You will oversee the actualization of these solutions by confirming our data is extracted, managed, and made available according to Ericsson's high standards and architectural designs.

What you will do:
- Design, build, and maintain scalable and efficient data architectures to support current and future business needs.
- Ensure data modelling practices align with best practices for maintainability, scalability, and performance.
- Solve business problems by applying advanced data engineering and complex data transformation techniques to large volumes of data.
- Stay abreast of the latest technologies and tools in data engineering (e.g., cloud platforms, databases, ETL tools) and lead the evaluation and integration of new solutions.
- Identify opportunities to implement emerging technologies (e.g., machine learning, AI-driven data management) to enhance data capabilities.
- Develop, optimize, and manage batch and real-time data ingestion pipelines.
- Work closely with software engineers and data scientists to ensure seamless integration of data pipelines into the larger system architecture.
- Automate data validation and monitoring processes to ensure data quality and integrity.
- Foster a collaborative environment focused on continuous learning, growth, and innovation.
- Manage the day-to-day operations of the team, setting priorities and ensuring timely delivery of data-related projects.

The skills you bring:
- Strong knowledge of database and data warehousing technologies (e.g., SQL, NoSQL).
- Strong experience in data engineering or a related field, with a focus on designing, developing, and managing large-scale data pipelines and architectures.
- Proficiency in building ETL/ELT pipelines using Python / Spark (PySpark); a minimal sketch follows this listing.
- Advanced skills in programming languages such as Python and SQL.
- Expertise in designing and optimizing data models and complex SQL queries for high-volume systems.
- Strong problem-solving abilities, with a focus on diagnosing and resolving complex data engineering challenges.
- Relevant certifications in data engineering, cloud platforms (e.g., AWS, GCP, Azure), or big data technologies (e.g., Hadoop, Spark) are a plus.
- Experience with HANA, Snowflake, and other database systems.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Kolkata
Req ID: 766494
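As a hedged sketch of the PySpark ETL/ELT point above: read, transform, and write a dataset. The paths, columns, and aggregation are illustrative assumptions, not from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Extract: paths and schema are placeholders, not from the posting.
events = spark.read.option("header", True).csv("/data/raw/events.csv")

# Transform: type the columns, filter bad rows, derive a daily aggregate.
daily = (
    events
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
    .groupBy(F.to_date("event_ts").alias("event_date"))
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("event_count"))
)

# Load: write the curated layer as partitioned Parquet.
daily.write.mode("overwrite").partitionBy("event_date").parquet("/data/curated/daily_events")

spark.stop()
```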

Posted 3 days ago

Apply

15.0 - 17.0 years

22 - 27 Lacs

Mumbai

Work from Office

Deloitte is looking for a Manager | SAP Data Migration | SAP to join our dynamic team and embark on a rewarding career journey.
- Delegating responsibilities and supervising business operations.
- Hiring, training, motivating, and coaching employees as they provide attentive, efficient service to customers; assessing employee performance and providing helpful feedback and training opportunities.
- Resolving conflicts or complaints from customers and employees.
- Monitoring store activity and ensuring it is properly provisioned and staffed.
- Analyzing information and processes and developing more effective or efficient processes and strategies.
- Establishing and achieving business and profit objectives.
- Maintaining a clean, tidy business, ensuring that signage and displays are attractive.
- Generating reports and presenting information to upper-level managers or other parties.
- Ensuring staff members follow company policies and procedures.
- Other duties to ensure the overall health and success of the business.

Posted 3 days ago

Apply

8.0 - 12.0 years

20 - 25 Lacs

Mumbai

Work from Office

Overview

Lead and architect enterprise-level AI/ML, GenAI, cloud stack, and Dev/ML Ops solutioning.

Key Responsibilities

Platform Development and Evangelism:
- Build scalable, customer-facing AI platforms.
- Evangelize the platform with customers and internal stakeholders.
- Ensure platform scalability, reliability, and performance to meet business needs.

Machine Learning Pipeline Design:
- Design ML pipelines for experiment management, model management, feature management, and model retraining.
- Implement A/B testing of models.
- Design APIs for model inferencing at scale.
- Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.

LLM Serving and GPU Architecture:
- Serve as an SME in LLM serving paradigms, with deep knowledge of GPU architectures.
- Expertise in distributed training and serving of large language models.
- Proficient in model- and data-parallel training using frameworks like DeepSpeed and serving frameworks like vLLM (a minimal serving sketch follows this listing).

Model Fine-Tuning and Optimization:
- Proven expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results.
- Reduce training and resource requirements for fine-tuning LLM and LVM models.

LLM Models and Use Cases:
- Extensive knowledge of different LLM models, with insights on the applicability of each model based on use cases.
- Proven experience delivering end-to-end solutions, from engineering to production, for specific customer use cases.

DevOps and LLMOps Proficiency:
- Proven expertise in DevOps and LLMOps practices.
- Knowledgeable in Kubernetes, Docker, and container orchestration.
- Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph.

Qualifications
8-12 years in core AI/ML.

Skill Matrix
- LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
- LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
- Databases/Data warehouse: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
- Cloud knowledge: AWS/Azure/GCP
- DevOps (knowledge): Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus
- Cloud certifications (bonus): AWS Professional Solutions Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert
- Proficient in Python, SQL, and JavaScript
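Since the posting names vLLM as a serving framework, here is a minimal sketch of offline batch generation with vLLM's Python API; the model name and prompts are illustrative assumptions, not from the posting.

```python
from vllm import LLM, SamplingParams

# Model name and prompts are placeholders, not from the posting.
prompts = [
    "Explain A/B testing of ML models in two sentences.",
    "What does model-parallel training mean?",
]
sampling = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

# vLLM batches and schedules the prompts internally for GPU efficiency.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

for output in llm.generate(prompts, sampling):
    print(output.prompt)
    print(output.outputs[0].text)
```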

Posted 3 days ago

Apply

5.0 - 7.0 years

30 - 35 Lacs

Bengaluru

Work from Office

We celebrate the rich diversity of the communities in which we operate and are committed to creating inclusive and safe environments where all our team members can contribute and succeed. We believe that all team members should feel valued, respected, and safe irrespective of gender, ethnicity, indigeneity, religious beliefs, education, age, disability, family responsibilities, sexual orientation, or gender identity, and we encourage applications from all candidates.

Job Description:
3+ years with AWS services such as IAM, API Gateway, EC2, and S3
2+ years of experience creating and deploying containers on Kubernetes
2+ years of experience with CI/CD pipelines such as Jenkins and GitHub
2+ years of experience with Snowflake data warehousing
5-7 years with the ETL/ELT paradigm
5-7 years with big data technologies such as Spark and Kafka
Strong skills in Python, Java, or Scala
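As a flavor of the Spark and Kafka requirements above, a minimal PySpark Structured Streaming sketch follows; the broker address, topic, and sink paths are hypothetical, and the spark-sql-kafka connector package is assumed to be on the classpath.

    # Read a Kafka topic and land it as Parquet (illustrative sketch).
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("kafka-ingest-demo").getOrCreate()

    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "orders")                     # hypothetical topic
        .option("startingOffsets", "latest")
        .load()
        .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
    )

    query = (
        events.writeStream
        .format("parquet")
        .option("path", "s3a://datalake/raw/orders/")            # hypothetical sink
        .option("checkpointLocation", "s3a://datalake/chk/orders/")
        .start()
    )
    query.awaitTermination()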

Posted 3 days ago

Apply

3.0 - 8.0 years

30 - 35 Lacs

Bengaluru

Work from Office

At Anko you'll be joining a diverse team that comes together to collaborate globally around tech. We are an innovation hub that powers and supports our retail brands. You'll feel the impact of the work you do for our millions of customers and team members every day. Our brands are focused on being customer-led, digitally enabled retailers, providing you with challenging and rewarding work that you will be proud of. Join our team, choose your own path, and work on projects that excite you.

Job Description:
3+ years with AWS services such as IAM, API Gateway, EC2, and S3
2+ years of experience creating and deploying containers on Kubernetes
2+ years of experience with CI/CD pipelines such as Jenkins and GitHub
2+ years of experience with Snowflake data warehousing
5-7 years with the ETL/ELT paradigm
5-7 years with big data technologies such as Spark and Kafka
Strong skills in Python, Java, or Scala

A place you can belong: We celebrate the rich diversity of the communities in which we operate and are committed to creating inclusive and safe environments where all our team members can contribute and succeed. We believe that all team members should feel valued, respected, and safe irrespective of gender, ethnicity, indigeneity, religious beliefs, education, age, disability, family responsibilities, sexual orientation, or gender identity, and we encourage applications from all candidates.
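For the AWS services requirement above, a minimal boto3 sketch; the bucket and object key are hypothetical, and credentials are assumed to come from the environment (for example, an IAM role).

    # Land a file in a data-lake bucket, then issue a short-lived read link.
    import boto3

    s3 = boto3.client("s3")

    s3.upload_file("/tmp/orders.csv", "my-datalake-raw",   # hypothetical bucket
                   "orders/2025/07/27/orders.csv")         # hypothetical key
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-datalake-raw", "Key": "orders/2025/07/27/orders.csv"},
        ExpiresIn=3600,  # link valid for one hour
    )
    print(url)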

Posted 3 days ago

Apply

2.0 - 7.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Data Engineer
Date: 27 Jul 2025 Location: Bangalore, IN Company: kmartaustr

Brighter Futures Start Here: At Anko you'll be joining a diverse team that comes together to collaborate globally around tech. We are an innovation hub that powers and supports our retail brands. You'll feel the impact of the work you do for our millions of customers and team members every day. Our brands are focused on being customer-led, digitally enabled retailers, providing you with challenging and rewarding work that you will be proud of. Join our team, choose your own path, and work on projects that excite you.

Job Description:
3+ years of experience as a Data Engineer
3+ years with AWS services such as IAM, API Gateway, EC2, and S3
2+ years of experience creating and deploying containers on Kubernetes
2+ years of experience with CI/CD pipelines such as Jenkins and GitHub
2+ years of experience with Snowflake data warehousing
5-7 years with the ETL/ELT paradigm
5-7 years with big data technologies such as Spark and Kafka
Strong skills in Python, Java, or Scala

A place you can belong: We celebrate the rich diversity of the communities in which we operate and are committed to creating inclusive and safe environments where all our team members can contribute and succeed. We believe that all team members should feel valued, respected, and safe irrespective of gender, ethnicity, indigeneity, religious beliefs, education, age, disability, family responsibilities, sexual orientation, or gender identity, and we encourage applications from all candidates.
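For the Snowflake warehousing requirement above, a minimal loading sketch with the snowflake-connector-python package; the account, credentials, file path, and table name are all hypothetical.

    # Stage a local file, bulk-load it, and verify the row count (illustrative).
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",   # hypothetical account identifier
        user="etl_user",
        password="...",         # placeholder; prefer key-pair auth or a secrets manager
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )

    try:
        cur = conn.cursor()
        cur.execute("PUT file:///tmp/orders.csv @%ORDERS_RAW")  # table stage
        cur.execute(
            "COPY INTO ORDERS_RAW FROM @%ORDERS_RAW "
            "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
        )
        cur.execute("SELECT COUNT(*) FROM ORDERS_RAW")
        print("rows loaded:", cur.fetchone()[0])
    finally:
        conn.close()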

Posted 3 days ago

Apply

2.0 - 7.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Senior Data Engineer
Date: 27 Jul 2025 Location: Bangalore, IN Company: kmartaustr

Job Description:
5-7 years of experience as a Data Engineer
3+ years with AWS services such as IAM, API Gateway, EC2, and S3
2+ years of experience creating and deploying containers on Kubernetes
2+ years of experience with CI/CD pipelines such as Jenkins and GitHub
2+ years of experience with Snowflake data warehousing
5-7 years with the ETL/ELT paradigm
5-7 years with big data technologies such as Spark and Kafka
Strong skills in Python, Java, or Scala

A place you can belong: We celebrate the rich diversity of the communities in which we operate and are committed to creating inclusive and safe environments where all our team members can contribute and succeed. We believe that all team members should feel valued, respected, and safe irrespective of gender, ethnicity, indigeneity, religious beliefs, education, age, disability, family responsibilities, sexual orientation, or gender identity, and we encourage applications from all candidates.

Posted 3 days ago

Apply

8.0 - 13.0 years

11 - 15 Lacs

Hyderabad

Work from Office

Job Type: Full-time Location: Hyderabad / Work from office Experience: 8+ years No. of positions: 1

Job Description: We are looking for an experienced ETL Lead to manage and deliver enterprise-grade data integration solutions using Azure Data Factory (ADF), SSIS, SQL querying, Azure SQL, Azure Data Lake, and preferably Azure Databricks. The role includes leading a team, building scalable ETL pipelines, and ensuring data quality and performance through efficient CI/CD practices.

Key Responsibilities:
Lead a team of engineers and manage ETL project lifecycles.
Design, develop, and optimize ETL workflows using ADF and SSIS.
Write complex SQL queries and perform performance tuning.
Integrate data from varied sources into Azure SQL and Data Lake.
Implement CI/CD pipelines for automated deployment and testing (see the sketch below).
Collaborate with stakeholders to translate business needs into technical solutions.
Maintain documentation and enforce best practices.

Requirements:
8+ years in ETL development and data integration.
Strong expertise in ADF, SSIS, SQL querying, Azure SQL, and Azure Data Lake.
Experience with CI/CD tools (e.g., Azure DevOps, Git).
Exposure to Azure Databricks is a plus.
Solid understanding of data warehousing and data modeling.
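ADF pipelines like those this role describes are commonly triggered and monitored programmatically; a minimal sketch using the azure-identity and azure-mgmt-datafactory packages, where the subscription id, resource group, factory, pipeline name, and parameters are all hypothetical:

    # Trigger an ADF pipeline run and check its status (illustrative sketch).
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient

    credential = DefaultAzureCredential()  # env vars / managed identity assumed
    adf = DataFactoryManagementClient(
        credential, "00000000-0000-0000-0000-000000000000"  # hypothetical subscription
    )

    run = adf.pipelines.create_run(
        resource_group_name="rg-data",           # hypothetical resource group
        factory_name="adf-enterprise",           # hypothetical factory
        pipeline_name="pl_load_sales",           # hypothetical pipeline
        parameters={"load_date": "2025-07-27"},  # hypothetical parameter
    )
    status = adf.pipeline_runs.get("rg-data", "adf-enterprise", run.run_id)
    print("pipeline run", run.run_id, "status:", status.status)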

Posted 3 days ago

Apply

2.0 - 6.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Diverse Lynx is looking for an ETL Test Engineer to join our dynamic team and embark on a rewarding career journey. You will be responsible for ensuring the accuracy, completeness, and efficiency of the ETL process used to transfer data from one system to another. The primary duties of an ETL Test Engineer may include:
Developing ETL test cases and test plans that ensure data quality, accuracy, and completeness
Conducting functional and non-functional testing of ETL processes to validate the integrity of the data being transferred
Identifying and documenting defects, issues, and potential improvements in the ETL process and sharing them with the development team
Creating and maintaining ETL test environments that simulate production environments for testing purposes
Conducting load testing to measure the scalability and performance of ETL processes under different workloads
Conducting regression testing to ensure that changes made to ETL processes do not introduce new defects or issues
Developing and maintaining test automation scripts to improve the efficiency of ETL testing
To perform the role effectively, candidates should possess strong analytical, problem-solving, and communication skills, as well as experience with ETL testing tools and technologies such as SQL, ETL testing frameworks, and test automation tools.
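As an illustration of the reconciliation checks such testing involves, a minimal self-contained sketch; it uses in-memory SQLite with toy tables, whereas real tests would point at the actual source and target connections.

    # Compare row counts and an aggregate checksum between source and target.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE src_orders (id INTEGER, amount REAL);
        CREATE TABLE tgt_orders (id INTEGER, amount REAL);
        INSERT INTO src_orders VALUES (1, 10.0), (2, 25.5), (3, 7.25);
        INSERT INTO tgt_orders VALUES (1, 10.0), (2, 25.5), (3, 7.25);
    """)

    def one(sql: str):
        return conn.execute(sql).fetchone()[0]

    # Completeness: every source row should have landed in the target.
    assert one("SELECT COUNT(*) FROM src_orders") == one("SELECT COUNT(*) FROM tgt_orders")

    # Accuracy: a simple aggregate checksum should match across systems.
    assert one("SELECT ROUND(SUM(amount), 2) FROM src_orders") == \
           one("SELECT ROUND(SUM(amount), 2) FROM tgt_orders")

    print("ETL reconciliation checks passed")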

Posted 3 days ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Data Warehouse / BI Testing
Experience (Years): 4-6
Essential Skills:
Must have 3-7 years of experience in data warehouse and ETL testing, preferably with an ETL tool
Must be proficient in writing SQL queries
Good to have: knowledge of Agile methodology, Scrum, Jira, Confluence
Good testing and analytical skills
Good writing and communication skills
Good understanding of and experience in projects
Proficient in SQL queries
Proficient in ETL testing
Minimum 8 years of test experience is preferable

Posted 3 days ago

Apply

5.0 - 10.0 years

11 - 13 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

We are looking for a highly skilled and experienced Senior Data Engineer to join our growing data engineering team. The ideal candidate will have a strong background in building and optimizing data pipelines and data architecture, as well as experience with Azure cloud services. You will work closely with cross-functional teams to ensure data is accessible, reliable, and ready for analytics and business insights.

Mandatory Skills:
Advanced SQL, Python, and PySpark for data engineering
Azure first-party services (ADF, Azure Databricks, Synapse, etc.)
Data warehousing (Redshift, Snowflake, BigQuery)
Workflow orchestration tools (Airflow, Prefect, or similar)
Experience with DBT (Data Build Tool) for transforming data in the warehouse
Hands-on experience with real-time/live data processing frameworks such as Apache Kafka, Apache Flink, or Azure Event Hubs

Key Responsibilities:
Design, develop, and maintain scalable and reliable data pipelines
Demonstrate experience and leadership across two full project cycles using Azure Data Factory, Azure Databricks, and PySpark
Collaborate with data analysts, scientists, and software engineers to understand data needs
Design and build scalable data pipelines using batch and real-time streaming architectures
Implement DBT models to transform, test, and document data pipelines
Implement data quality checks and monitoring systems
Optimize data delivery and processing across a wide range of sources and formats
Ensure security and governance policies are followed in all data handling processes
Evaluate and recommend tools and technologies to improve data engineering capabilities
Lead and mentor junior data engineers as needed
Work with cross-functional teams in a dynamic and fast-paced environment

Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
Databricks Professional certifications are preferred

Technical Skills:
Programming: Python, PySpark, SQL
ETL tools and orchestration (e.g., Airflow, DBT); cloud platforms (Azure)
Real-time streaming tools: Kafka, Flink, Spark Streaming, Azure Event Hubs
Data warehousing: Snowflake, BigQuery, Redshift
Cloud: Azure (ADF, Azure Databricks)
Orchestration: Apache Airflow, Prefect, Luigi
Databases: PostgreSQL, MySQL, NoSQL (MongoDB, Cassandra)
Tools: Git, Docker, Kubernetes (basic), CI/CD

Soft Skills:
Strong problem-solving and analytical thinking
Excellent verbal and written communication
Ability to manage multiple tasks and deadlines
Collaborative mindset with a proactive attitude
Strong analytical skills related to working with unstructured datasets

Good to Have:
Experience with real-time data processing (Kafka, Flink)
Knowledge of data governance and privacy regulations (GDPR, HIPAA)
Familiarity with ML model data pipeline integration

Work Experience:
Minimum 5 years of relevant experience in data engineering roles
Experience with Azure first-party services across at least two full project lifecycles

Compensation & Benefits:
Competitive salary and annual performance-based bonuses
Comprehensive health and optional parental insurance
Optional retirement savings and tax savings plans
Key Result Areas (KRAs):
Timely development and delivery of high-quality data pipelines
Implementation of scalable data architectures
Collaboration with cross-functional teams on data initiatives
Compliance with data security and governance standards

Key Performance Indicators (KPIs):
Uptime and performance of data pipelines
Reduction in data processing time
Number of critical bugs post-deployment
Stakeholder satisfaction scores
Successful data integrations and migrations
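Given the orchestration requirement above (Airflow, Prefect, or similar), a minimal Airflow 2.x DAG sketch; the DAG id, schedule, and task bodies are hypothetical placeholders for real extract/transform logic.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull increments from source systems")

    def transform():
        print("run transformations (e.g., trigger dbt or a Databricks job)")

    with DAG(
        dag_id="daily_sales_pipeline",   # hypothetical DAG id
        start_date=datetime(2025, 1, 1),
        schedule="@daily",               # schedule_interval on Airflow < 2.4
        catchup=False,
    ) as dag:
        t1 = PythonOperator(task_id="extract", python_callable=extract)
        t2 = PythonOperator(task_id="transform", python_callable=transform)
        t1 >> t2  # extract runs before transform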

Posted 3 days ago

Apply