53 Relational SQL Jobs - Page 2

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

2 - 5 years

6 - 10 Lacs

Navi Mumbai

Work from Office

Source: Naukri

As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in choosing the most suitable data management systems and identifying the crucial data required for insightful analysis. You'll also tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Your primary responsibilities include:
- Developing and maintaining data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools (see the sketch below).
- Liaising with business teams and technical leads to gather requirements, identify data sources and data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing, and support UAT.
- Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Expertise in data warehousing, information management, data integration and business intelligence using the ETL tool Informatica PowerCenter.
- Knowledge of cloud, Power BI, and data migration to the cloud.
- Experience in Unix shell scripting and Python.
- Experience with relational SQL, big data, etc.

Preferred technical and professional experience:
- Knowledge of Microsoft Azure cloud.
- Experience in Informatica PowerCenter.
- Experience in Unix shell scripting and Python.
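To make the "batch pipelines over relational sources" part of this role concrete, here is a minimal, hedged batch-ETL sketch in Python using the standard-library sqlite3 module. The table names (raw_orders, clean_orders) and the quality rule are illustrative assumptions, not part of IBM's stack, which per the posting centres on Informatica PowerCenter and cloud ETL/ELT tools.

```python
# Minimal batch ETL sketch: extract rows via SQL, apply a simple quality rule, load a clean table.
# Table and column names are illustrative only.
import sqlite3


def run_batch_etl(db_path: str = "warehouse.db") -> int:
    """Extract raw rows, drop records failing a quality rule, load a clean target table."""
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()

    # Illustrative source/target tables so the sketch runs end to end.
    cur.execute("CREATE TABLE IF NOT EXISTS raw_orders (order_id INTEGER, amount REAL)")
    cur.execute("CREATE TABLE IF NOT EXISTS clean_orders (order_id INTEGER, amount REAL)")
    cur.executemany("INSERT INTO raw_orders VALUES (?, ?)", [(1, 120.5), (2, None), (3, 80.0)])

    # Transform + load: keep only rows that pass the data-quality rule (non-null amount).
    cur.execute(
        "INSERT INTO clean_orders SELECT order_id, amount FROM raw_orders WHERE amount IS NOT NULL"
    )
    conn.commit()
    loaded = cur.execute("SELECT COUNT(*) FROM clean_orders").fetchone()[0]
    conn.close()
    return loaded


if __name__ == "__main__":
    print(f"Loaded {run_batch_etl()} clean rows")
```

In a PowerCenter or cloud ELT setting the same extract-validate-load step would typically be expressed as mappings or SQL pushdown rather than hand-written Python; the sketch only shows the shape of the work.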

Posted 2 months ago

Apply

5 - 9 years

20 - 25 Lacs

Pune

Work from Office

Source: Naukri

Primary Responsibilities:
- Provide engineering leadership, mentorship, and technical direction to a small team of engineers (~6 members).
- Partner with your Engineering Manager to ensure engineering tasks are understood, broken down, and implemented to the highest quality standards.
- Collaborate with team members to solve challenging engineering tasks on time and with high quality.
- Engage in code reviews and training of team members.
- Support continuous deployment pipeline code.
- Situationally troubleshoot production issues alongside the support team.
- Continually research and recommend product improvements.
- Create and integrate features for our enterprise software solution using the latest Python technologies.
- Assist with and adhere to project deadlines and schedules.
- Evaluate, recommend, and propose solutions to existing systems.
- Actively communicate with team members to clarify requirements and overcome obstacles to meet team goals.
- Leverage open-source and other technologies and languages outside of the Python platform.
- Develop cutting-edge solutions to maximize the performance, scalability, and distributed processing capabilities of the system.
- Provide troubleshooting and root cause analysis for production issues escalated to the engineering team.
- Work with development teams in an agile context as it relates to software development, including Kanban, automated unit testing, test fixtures, and pair programming.

Requirements:
- 4-8 or more years of experience as a Python developer on enterprise projects using Python, Flask, FastAPI, Django, PyTest, Celery and other Python frameworks (see the sketch below).
- Software development experience including object-oriented programming, concurrency programming, modern design patterns, RESTful service implementation, micro-service architecture, test-driven development, and acceptance testing.
- Familiarity with tools used to automate the deployment of an enterprise software solution to the cloud: Terraform, GitHub Actions, Concourse, Ansible, etc.
- Proficiency with Git as a version control system.
- Experience with Docker and Kubernetes.
- Experience with relational SQL and NoSQL databases, including MongoDB and MSSQL.
- Experience with object-oriented languages: Python, Java, Scala, C#, etc.
- Experience with testing tools such as PyTest, Wiremock, xUnit, mocking frameworks, etc.
- Experience with GCP technologies such as BigQuery, GKE, GCS, DataFlow, Kubeflow, and/or Vertex AI.
- Excellent problem-solving and communication skills.
- Experience with Java and Spring is a big plus.

UKG is proud to be an equal opportunity employer and is committed to promoting diversity and inclusion in the workplace, including the recruitment process. Disability Accommodation: For individuals with disabilities that need additional assistance at any point in the application and interview process, please email .
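As a feel for the Python stack named above (FastAPI plus PyTest), here is a small, hedged sketch of a service endpoint and its unit test. It assumes fastapi, httpx, and pytest are installed; the /health route is an invented example, not part of any UKG product.

```python
# Minimal FastAPI endpoint plus a PyTest unit test exercising it in-process.
# Assumes: pip install fastapi httpx pytest
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()


@app.get("/health")
def health() -> dict:
    """Simple liveness endpoint of the sort covered by automated unit tests."""
    return {"status": "ok"}


client = TestClient(app)


def test_health_returns_ok():
    # Collected automatically when running `pytest` against this file.
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}
```

Running `pytest` against the file exercises the endpoint through FastAPI's in-process TestClient, the kind of automated unit testing the role calls for.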

Posted 2 months ago

Apply

7 - 12 years

22 - 27 Lacs

Bengaluru

Work from Office

Source: Naukri

Overview:
- Work closely with Data Analysts and Architects to achieve DQ objectives.
- Validate data pipelines and ensure the quality of datasets.
- Collaborate with the Data Delivery technology teams and architects to define and develop solutions.
- Gather business requirements for building the Data Quality Dashboard, covering the full set of functionality from exception identification to remediation.
- Formulate and actively develop test cases, test scripts, and output report specifications for data remediation projects.
- Maintain up-to-date documentation for data quality audits and processes.
- Manage and provide data quality control; perform deep-dive analysis of data, identify problems and root causes, and propose solutions.
- Integrate data QA automation using DQ tools.
- Research new tools and technologies for the project and propose innovations the team can benefit from.
- Mentor the QA team and provide solutions to team issues.
- Apply troubleshooting skills to optimize performance.
- Lead a team of 8 data QA engineers; own and prioritize work allocation based on business need.
- Provide regular coaching and feedback to direct reports.

Responsibilities:
- 7+ years of experience in data test automation, data comparison, and validation.
- Must have experience with ETL/ELT tools and pipelines.
- Working experience with Python libraries such as Pandas, NumPy, and SQLAlchemy for ETL (see the sketch below).
- Strong understanding of data warehouse / data lake architecture and development.
- Experience with relational SQL and NoSQL databases.
- Good knowledge of SQL, DB procedures, packages, and functions.
- Experience with data pipeline and workflow management tools.
- Experience working with relational databases.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience in SQL injection and query performance (Athena).
- Must have CI/CD knowledge.
- Knowledge of Bitbucket/Git.
- Excellent interpersonal and communication skills.

Qualifications / Additional Skills:
- Be tool agnostic and recommend new processes and techniques to improve our testing capability for clients.
- Good analytical, problem-solving, and communication skills.
- Leadership/mentorship experience.
- Demonstrate motivation, ownership, compassion, and leadership within the team.
- Able to communicate feedback articulately.
- Has an analytical way of thinking.

Nice to have:
- Understanding of data observability tools like Great Expectations, Lightup, etc.
- Understanding of cloud technologies in AWS or Azure.
- Experience in advanced Excel.
- Experience in data integration tools like Power BI, Datadog, Atlan, etc.
- Team management experience.
- Knowledge of Atlassian products like Jira and Confluence.

Supervisory Responsibilities: Maximum 5 QA employees.
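A hedged illustration of the "data comparison and validation" work using Pandas: the function below reconciles a source extract against a target load. The column and key names are hypothetical; a real implementation would read both sides from the actual databases, for example via SQLAlchemy.

```python
# Illustrative data-validation check: reconcile a source extract with a target load using pandas.
import pandas as pd


def compare_datasets(source: pd.DataFrame, target: pd.DataFrame, key: str) -> dict:
    """Return simple reconciliation metrics between source and target datasets."""
    missing_in_target = set(source[key]) - set(target[key])
    extra_in_target = set(target[key]) - set(source[key])
    null_counts = target.isna().sum().to_dict()
    return {
        "source_rows": len(source),
        "target_rows": len(target),
        "missing_in_target": sorted(missing_in_target),
        "extra_in_target": sorted(extra_in_target),
        "target_null_counts": null_counts,
    }


if __name__ == "__main__":
    # Hypothetical extract/load frames just to show the output shape.
    src = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})
    tgt = pd.DataFrame({"id": [1, 2], "amount": [10.0, None]})
    print(compare_datasets(src, tgt, key="id"))
```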

Posted 2 months ago

Apply

2 - 4 years

4 - 6 Lacs

Chennai

Work from Office

Source: Naukri

Bounteous x Accolite is a premier end-to-end digital transformation consultancy dedicated to partnering with ambitious brands to create digital solutions for today's complex challenges and tomorrow's opportunities. With uncompromising standards for technical and domain expertise, we deliver innovative and strategic solutions in Strategy, Analytics, Digital Engineering, Cloud, Data & AI, Experience Design, and Marketing. Our Co-Innovation methodology is a unique engagement model designed to align interests and accelerate value creation. Our clients worldwide benefit from the skills and expertise of over 4,000 expert team members across the Americas, APAC, and EMEA. By partnering with leading technology providers, we craft transformative digital experiences that enhance customer engagement and drive business success.

About Bounteous (https://www.bounteous.com/): Founded in 2003 in Chicago, Bounteous is a leading digital experience consultancy that co-innovates with the world's most ambitious brands to create transformative digital experiences. With services in Strategy, Experience Design, Technology, Analytics and Insight, and Marketing, Bounteous elevates brand experiences through technology partnerships and drives superior client outcomes. For more information, please visit www.bounteous.com.

Information Security Responsibilities:
- Promote and enforce awareness of key information security practices, including acceptable use of information assets, malware protection, and password security protocols.
- Identify, assess, and report security risks, focusing on how these risks impact the confidentiality, integrity, and availability of information assets.
- Understand and evaluate how data is stored, processed, or transmitted, ensuring compliance with data privacy and protection standards (GDPR, CCPA, etc.).
- Ensure data protection measures are integrated throughout the information lifecycle to safeguard sensitive information.

Preferred Qualifications:
- 7+ years of experience in a Data Engineer role, with a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- Working knowledge of ETL technology: Talend / Apache NiFi / AWS Glue.
- Experience with relational SQL and NoSQL databases.
- Experience with big data tools: Hadoop, Spark, Kafka, etc. (nice to have).
- Advanced Alteryx Designer (mandatory at this point; relaxing that would be tough).
- Tableau dashboarding.
- AWS (familiarity with Lambda, EC2, AMI).
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (nice to have; see the sketch below).
- Experience with cloud services: EMR, RDS, Redshift or Snowflake.
- Experience with stream-processing systems: Storm, Spark Streaming, etc. (nice to have).
- Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.

Responsibilities:
- Work with Project Managers, Senior Architects, and other team members from Bounteous and client teams to evaluate data systems and project requirements.
- In cooperation with platform developers, develop scalable and fault-tolerant Extract-Transform-Load (ETL) and integration systems for various data platforms that can operate at the appropriate scale, meeting security, logging, fault tolerance and alerting requirements.
- Work on data migration projects.
- Effectively communicate the data requirements of various data platforms to team members.
- Evaluate and document existing data ecosystems and platform capabilities.
- Configure CI/CD pipelines.
- Implement the proposed architecture and assist in infrastructure setup.

We invite you to stay connected with us by subscribing to our monthly job openings alert here. Research shows that women and other underrepresented groups apply only if they meet 100% of the criteria of a job posting. If you have passion and intelligence, and possess a technical knack (even if you're missing some of the above), we encourage you to apply. Bounteous x Accolite is focused on promoting an inclusive environment and is proud to be an equal opportunity employer. We celebrate the different viewpoints and experiences our diverse group of team members bring to Bounteous x Accolite. Bounteous x Accolite does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, physical or mental disability, national origin, veteran status, or any other status protected under federal, state, or local law. In addition, you have the opportunity to participate in several Team Member Networks, sometimes referred to as employee resource groups (ERGs), that host space for individuals with shared identities, interests, and passions. Our Team Member Networks celebrate communities of color, life as a working parent or caregiver, the 2SLGBTQIA+ community, wellbeing, and more. Regardless of your respective identity, there are various avenues through which we involve team members in the Bounteous x Accolite community. Bounteous x Accolite is willing to sponsor eligible candidates for employment visas.
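Since the qualifications mention workflow managers such as Airflow, here is a minimal, hedged DAG sketch. It assumes Apache Airflow 2.4+ is installed; the DAG id, task names, and extract/load callables are invented for illustration and are not part of Bounteous's platforms.

```python
# Minimal Airflow DAG sketch: two Python tasks with an explicit dependency.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling rows from the source system")  # placeholder extract step


def load():
    print("writing transformed rows to the warehouse")  # placeholder load step


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Declare ordering only; the Airflow scheduler decides when each task actually runs.
    extract_task >> load_task
```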

Posted 2 months ago

Apply

4 - 6 years

6 - 8 Lacs

Pune

Work from Office

Source: Naukri

Capgemini Invent: Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your role:
- Analyse and organize raw data.
- Build data systems and pipelines.
- Evaluate business needs and objectives.
- Interpret trends and patterns.
- Conduct complex data analysis and report on results.
- Prepare data for prescriptive and predictive modelling.
- Build algorithms and prototypes.
- Combine raw information from different sources.
- Explore ways to enhance data quality and reliability.
- Identify opportunities for data acquisition.
- Develop analytical tools and programs.
- Collaborate with data scientists and architects on several projects.
- Participate in code peer reviews to ensure our applications comply with best practices.

Your profile:
- Experience with any Big Data tools: Hadoop, Spark, Kafka, Sqoop, Flume, Hive, etc.
- Experience with any relational SQL and NoSQL databases, including Postgres, Cassandra, SQL Server, Oracle, Snowflake.
- Experience with any data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience in any cloud platform: Azure, AWS or GCP.
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
- Must have hands-on experience in DevOps and CI/CD deployments.
- Should know basic and advanced SQL and be able to write complex queries (see the SQL sketch below).
- Strong experience in data warehousing and dimensional modelling.
- Should be a very good team player, able to work in a geographically dispersed team.

What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment that helps you maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
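To illustrate the "advanced SQL / complex queries" expectation in the profile, here is a hedged sketch that runs a window-function query through Python's built-in sqlite3 module. Window functions need SQLite 3.25+ (bundled with recent Python builds); the sales table and its values are invented.

```python
# Window-function example: per-region running totals and revenue ranks over an invented table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE sales (region TEXT, month TEXT, revenue REAL);
    INSERT INTO sales VALUES
        ('APAC', '2024-01', 100), ('APAC', '2024-02', 120),
        ('EMEA', '2024-01', 90),  ('EMEA', '2024-02', 95);
    """
)

# Rank each month's revenue within its region and compute a running total per region.
query = """
SELECT region,
       month,
       revenue,
       SUM(revenue) OVER (PARTITION BY region ORDER BY month) AS running_revenue,
       RANK() OVER (PARTITION BY region ORDER BY revenue DESC) AS revenue_rank
FROM sales
ORDER BY region, month;
"""
for row in conn.execute(query):
    print(row)
```

The same window-function pattern carries over to warehouse engines (Snowflake, Postgres, SQL Server) used in dimensional-modelling work; only the surrounding DDL and loading mechanics change.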

Posted 2 months ago

Apply

2 - 4 years

4 - 6 Lacs

Pune

Work from Office

Source: Naukri

As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in choosing the most suitable data management systems and identifying the crucial data required for insightful analysis. You'll also tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Your primary responsibilities include:
- Developing and maintaining data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools.
- Liaising with business teams and technical leads to gather requirements, identify data sources and data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing, and support UAT.
- Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Expertise in data warehousing, information management, data integration and business intelligence using the ETL tool Informatica PowerCenter.
- Knowledge of cloud, Power BI, and data migration to the cloud.
- Experience in Unix shell scripting and Python.
- Experience with relational SQL, big data, etc.

Preferred technical and professional experience:
- Knowledge of Microsoft Azure cloud.
- Experience in Informatica PowerCenter.
- Experience in Unix shell scripting and Python.

Posted 3 months ago

Apply

4 - 9 years

6 - 11 Lacs

Bengaluru

Work from Office

Source: Naukri

About The Role: The Intel Foundry Manufacturing and Supply Chain (FMSC) Automation team is looking for a highly motivated Big Data Engineer with strong data engineering skills for data integration of various manufacturing data. You will be responsible for engaging with customers and driving development from ideation to deployment and beyond. This is a technical role that requires the direct design and development of robust, scalable, performant systems for world-class manufacturing data engineering.

Responsibilities include:
- Create and maintain optimal data architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Work with stakeholders, including users and cross-functional teams, to assist with data-related technical issues and support their data infrastructure needs.
- Follow standard processes to keep data secure with the right access and authorization.
- Focus on automated testing and robust monitoring.

The ideal candidate must exhibit the following behavioral traits:
- Excellent problem-solving and interpersonal communication skills.
- Strong desire to learn and share knowledge with others.
- Inquisitive, innovative, and a team player with a strong focus on quality workmanship.
- Troubleshooting skills and root cause analysis for performance issues.
- Ability to learn, adopt and implement new skills to drive innovation and excellence.
- Ability to work with cross-functional teams in a dynamic environment.

Qualifications
Minimum qualifications:
- A bachelor's degree with 4+ years of experience in a related field.
- Experience building and optimizing big data pipelines.
- Experience handling unstructured data.
- Experience with data transformations, structures, metadata, and workload management.
- Experience with big data tools: Spark, Kafka, NiFi, etc. (see the sketch below).
- Experience with at least one of the programming languages: Python, C#, .NET.
- Experience with relational SQL and NoSQL databases.
- Experience leveraging open-source packages.
- Experience with cloud-native technologies such as Docker, Kubernetes, Rancher, etc.

Good-to-have skills:
- Experience with semiconductor manufacturing.
- Experience with data engineering on cloud.
- Experience in developing AI/ML solutions.

Inside this Business Group: As the world's largest chip manufacturer, Intel strives to make every facet of semiconductor manufacturing state-of-the-art, from semiconductor process development and manufacturing, through yield improvement to packaging, final test and optimization, and world-class supply chain and facilities support. Employees in the Technology Development and Manufacturing Group are part of a worldwide network of design, development, manufacturing, and assembly/test facilities, all focused on utilizing the power of Moore's Law to bring smart, connected devices to every person on Earth.
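As a hedged sketch of the Spark-plus-Kafka ingestion mentioned in the qualifications, the snippet below reads a Kafka topic with Spark Structured Streaming. It assumes PySpark with the spark-sql-kafka connector available on the classpath; the broker address and topic name are placeholders, not Intel systems.

```python
# Spark Structured Streaming sketch: read a Kafka topic and inspect the raw records.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "factory-telemetry")           # placeholder topic
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

# Write the raw stream to the console for inspection; a real pipeline would land it
# in a governed lake table with checkpointing, schema enforcement, and access controls.
query = events.writeStream.format("console").option("truncate", "false").start()
query.awaitTermination()
```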

Posted 3 months ago

Apply

4 - 6 years

6 - 8 Lacs

Mumbai

Work from Office

Source: Naukri

Capgemini Invent: Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your role:
- Analyse and organize raw data.
- Build data systems and pipelines.
- Evaluate business needs and objectives.
- Interpret trends and patterns.
- Conduct complex data analysis and report on results.
- Prepare data for prescriptive and predictive modelling.
- Build algorithms and prototypes.
- Combine raw information from different sources.
- Explore ways to enhance data quality and reliability.
- Identify opportunities for data acquisition.
- Develop analytical tools and programs.
- Collaborate with data scientists and architects on several projects.
- Participate in code peer reviews to ensure our applications comply with best practices.

Your profile:
- Experience with any Big Data tools: Hadoop, Spark, Kafka, Sqoop, Flume, Hive, etc.
- Experience with any relational SQL and NoSQL databases, including Postgres, Cassandra, SQL Server, Oracle, Snowflake.
- Experience with any data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience in any cloud platform: Azure, AWS or GCP.
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
- Must have hands-on experience in DevOps and CI/CD deployments.
- Should know basic and advanced SQL and be able to write complex queries.
- Strong experience in data warehousing and dimensional modelling.
- Should be a very good team player, able to work in a geographically dispersed team.

What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment that helps you maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 3 months ago

Apply

3 - 7 years

25 - 40 Lacs

Bengaluru

Work from Office

Source: Naukri

Help shape the future of mobility. Would you like to join our exciting journey and change the automotive industry? Aptiv is one of the leading automotive suppliers and is at the forefront of solving mobility's toughest challenges. As a large technology company, we are looking for new talent for one of our leading Tech Centers for Artificial Intelligence in Bangalore, India. We offer the chance to work in a challenging technical environment where science is transferred into real products. You can work together with a fantastic, passionate, young, international team of technical experts from around the globe to develop new sensors, algorithms and platforms to shape the future of mobility. Want to join us?

Your Role:
- Work closely with architects and developers to concept and design auto-scaling solutions across the world.
- Be part of development and operations and help build and enhance a new, groundbreaking CI/CD platform hosted in multiple clouds.
- Script, configure, and create state-of-the-art solutions in a scalable hybrid cloud environment.
- Connect and deploy innovative solutions to a game-changing CI platform.
- Participate in technical discussions with our Agile team.
- Ensure that all applicable data privacy requirements are met.
- Apply and consider cost optimization while working in a cloud environment.
- Follow coding standards and guidelines in the software development process; debug, troubleshoot, and fix bugs.

Your Background:
- Bachelor's (BE) / Master's (MS) degree in a technical discipline (engineering, computer science, mathematics, physics, or a related field of study).
- Skills: Linux, TypeScript, Node.js, Angular, MongoDB (NoSQL), Azure, AWS, microservices, CI/CD.
- 3+ years of experience writing software using scripting languages such as JavaScript/TypeScript and/or Python, preferably on Linux.
- Experience with cloud infrastructure such as Azure (preferred), AWS, Google Cloud.
- Experience with NoSQL databases like MongoDB.
- Experience with relational SQL and NoSQL databases.
- Experience building data analysis and visualization dashboards using tools such as Qlik Sense, Grafana/Kibana and the ELK stack.
- Experience with Rust is a bonus; hands-on experience with TypeScript development on the server side.

Why join us? You can grow at Aptiv. Whether you are working towards a promotion, stepping into leadership, considering a lateral career move, or simply expanding your network, you can do it here. Aptiv provides an inclusive work environment where all individuals can grow and develop, regardless of gender, ethnicity or beliefs. You can have an impact. Safety is a core Aptiv value; we want a safer world for us and our children, one with zero fatalities, zero injuries, zero accidents. You have support. Our team is our most valuable asset. We ensure you have the resources and support you need to take care of your family and your physical and mental health with a competitive health insurance package.

Your Benefits at Aptiv: Higher education opportunities (Udacity, Udemy and Coursera are available for your continuous growth and development); life and accident insurance; a Well-Being Program that includes regular workshops and networking events; access to fitness clubs (T&C apply). Apply today, and together let's change tomorrow!

Posted 3 months ago

Apply

2 - 5 years

6 - 10 Lacs

Pune

Work from Office

Source: Naukri

As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in choosing the most suitable data management systems and identifying the crucial data required for insightful analysis. You'll also tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Your primary responsibilities include:
- Developing and maintaining data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools.
- Liaising with business teams and technical leads to gather requirements, identify data sources and data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing, and support UAT.
- Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Expertise in data warehousing, information management, data integration and business intelligence using the ETL tool Informatica PowerCenter.
- Knowledge of cloud, Power BI, and data migration to the cloud.
- Experience in Unix shell scripting and Python.
- Experience with relational SQL, big data, etc.

Preferred technical and professional experience:
- Knowledge of Microsoft Azure cloud.
- Experience in Informatica PowerCenter.
- Experience in Unix shell scripting and Python.

Posted 3 months ago

Apply

4 - 8 years

6 - 10 Lacs

Chennai, Bengaluru, Hyderabad

Work from Office

Source: Naukri

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love, driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the V Team Life.

What you'll be doing: You will be responsible for migrating our feature-set preparation code from BTEQ scripts to Hive-compatible HQL or Spark in an optimized way (see the sketch below). You will incorporate possible automation techniques to speed up this migration project. You will make sure the migration is delivered in phases, on time, with high code quality and standards followed.
- Migrating BTEQ scripts to HQL or Spark.
- Documenting the complete migration end to end in VZ Grid.
- Identifying, designing, and implementing internal process improvements by automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies.
- Working with partners, including the Executive, Product, Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Rewriting our big data ETL pipeline from BTEQ to HQL/Spark to create datasets for our modeling efforts.
- Wrangling raw data from large, diverse data sets from our distribution partners.
- Mentoring team members on a need basis.

What we're looking for: You are excited to work in a cloud environment, supporting development and deployment in the Verizon Grid. You are self-directed and comfortable supporting the data needs of multiple teams, systems and products. You are excited by the prospect of optimizing or even re-designing the architecture to support our next generation of products and dataset creation for modelling purposes.

You'll need to have:
- Bachelor's degree or four or more years of work experience.
- Four or more years of relevant work experience.
- Experience with big data tools: the Hadoop ecosystem (Hive, Pig, Oozie, Spark, Kafka, Elasticsearch, Kibana).
- Working SQL knowledge and experience working with relational databases, query authoring (SQL), and a variety of databases.
- Experience in scripting languages: Unix shell scripts, Python or Scala.

Even better if you have one or more of the following:
- Master's degree.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: NiFi.
- Experience with stream-processing systems: Spark Streaming, Storm, etc.
- Experience transforming complex data into easily understandable and actionable information.
- Experience working in a fast-paced environment.
- Ability to quickly adapt to changing priorities.
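A hedged illustration of the BTEQ-to-Spark migration described above: a Teradata-style aggregation rewritten as Hive-compatible Spark SQL. Table names, columns, and the target table are placeholders, not Verizon's actual feature-set tables.

```python
# PySpark sketch: a BTEQ-style SELECT ... GROUP BY step re-expressed as Hive-compatible Spark SQL.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("bteq-migration-sketch")
    .enableHiveSupport()  # lets Spark read/write Hive metastore tables
    .getOrCreate()
)

# Equivalent of a BTEQ aggregate block, expressed as SQL that also runs as HQL on Hive.
feature_set = spark.sql(
    """
    SELECT customer_id,
           COUNT(*)      AS event_count,
           MAX(event_ts) AS last_event_ts
    FROM   events_raw          -- placeholder source table
    GROUP BY customer_id
    """
)

# Persist the result for downstream modeling jobs (placeholder target table name).
feature_set.write.mode("overwrite").saveAsTable("customer_activity_features")
```

The SELECT itself is plain Hive-compatible SQL, which is what makes Spark SQL a natural target when lifting query blocks out of BTEQ scripts.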

Posted 3 months ago

Apply

8 - 12 years

10 - 14 Lacs

Pune

Work from Office

Source: Naukri

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow, informed and validated by science and data, superpowered by creativity and design, and all underpinned by technology created with purpose.

About The Role:
- 5+ years of experience in creating data strategy frameworks/roadmaps.
- Relevant experience in data exploration and profiling; involvement in data literacy activities for all stakeholders.
- 5+ years in analytics and data maturity evaluation based on a current as-is vs. to-be framework.
- 5+ years of relevant experience in creating functional requirements documents and enterprise to-be data architecture.
- Relevant experience in identifying and prioritizing use cases for the business; identification of important KPIs and opex/capex for CxOs.
- 2+ years of working knowledge in Data Strategy: Data Governance, MDM, etc.
- 4+ years of experience in a Data Analytics operating model with a vision spanning prescriptive, descriptive, predictive and cognitive analytics.

Primary Skills: 8+ years of experience in a Data Strategy role, with a graduate degree in Computer Science, Informatics, Information Systems, or another quantitative field, plus experience with the following software/tools:
- Experience with and understanding of big data tools: Hadoop, Spark, Kafka, etc.
- Experience with and understanding of relational SQL and NoSQL databases, including Postgres and Cassandra/MongoDB.
- Experience with and understanding of data pipeline and workflow management tools: Luigi, Airflow, etc.
- Good-to-have cloud skill sets (Azure/AWS/GCP).
- 5+ years of advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases: Postgres/SQL/Mongo.
- 2+ years of working knowledge in Data Strategy: Data Governance, MDM, etc.

Posted 3 months ago

Apply

2 - 5 years

6 - 10 Lacs

Hyderabad

Work from Office

Source: Naukri

As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in choosing the most suitable data management systems and identifying the crucial data required for insightful analysis. You'll also tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Your primary responsibilities include:
- Developing and maintaining data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools.
- Liaising with business teams and technical leads to gather requirements, identify data sources and data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing, and support UAT.
- Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Expertise in data warehousing, information management, data integration and business intelligence using the ETL tool Informatica PowerCenter.
- Knowledge of cloud, Power BI, and data migration to the cloud.
- Experience in Unix shell scripting and Python.
- Experience with relational SQL, big data, etc.

Preferred technical and professional experience:
- Knowledge of Microsoft Azure cloud.
- Experience in Informatica PowerCenter.
- Experience in Unix shell scripting and Python.

Posted 3 months ago

Apply

3 - 5 years

5 - 10 Lacs

Hyderabad

Work from Office

Source: Naukri

Responsibilities
As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in choosing the most suitable data management systems and identifying the crucial data required for insightful analysis. You'll also tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Your primary responsibilities include:
- Developing and maintaining data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools.
- Liaising with business teams and technical leads to gather requirements, identify data sources and data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing, and support UAT.
- Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Expertise in data warehousing, information management, data integration and business intelligence using the ETL tool Informatica PowerCenter.
- Knowledge of cloud, Power BI, and data migration to the cloud.
- Experience in Unix shell scripting and Python.
- Experience with relational SQL, big data, etc.

Preferred technical and professional experience:
- Knowledge of Microsoft Azure cloud.
- Experience in Informatica PowerCenter.
- Experience in Unix shell scripting and Python.

Posted 3 months ago

Apply

3 - 5 years

5 - 10 Lacs

Pune

Work from Office

Source: Naukri

Responsibilities
As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in choosing the most suitable data management systems and identifying the crucial data required for insightful analysis. You'll also tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Your primary responsibilities include:
- Developing and maintaining data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools.
- Liaising with business teams and technical leads to gather requirements, identify data sources and data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing, and support UAT.
- Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Expertise in data warehousing, information management, data integration and business intelligence using the ETL tool Informatica PowerCenter.
- Knowledge of cloud, Power BI, and data migration to the cloud.
- Experience in Unix shell scripting and Python.
- Experience with relational SQL, big data, etc.

Preferred technical and professional experience:
- Knowledge of Microsoft Azure cloud.
- Experience in Informatica PowerCenter.
- Experience in Unix shell scripting and Python.

Posted 3 months ago

Apply

2 - 5 years

4 - 7 Lacs

Hyderabad

Work from Office

Source: Naukri

Responsibilities
As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in choosing the most suitable data management systems and identifying the crucial data required for insightful analysis. You'll also tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Your primary responsibilities include:
- Developing and maintaining data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools.
- Liaising with business teams and technical leads to gather requirements, identify data sources and data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing, and support UAT.
- Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Expertise in data warehousing, information management, data integration and business intelligence using the ETL tool Informatica PowerCenter.
- Knowledge of cloud, Power BI, and data migration to the cloud.
- Experience in Unix shell scripting and Python.
- Experience with relational SQL, big data, etc.

Preferred technical and professional experience:
- Knowledge of Microsoft Azure cloud.
- Experience in Informatica PowerCenter.
- Experience in Unix shell scripting and Python.

Posted 3 months ago

Apply

2 - 5 years

6 - 10 Lacs

Pune

Work from Office

Source: Naukri

Responsibilities
As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in choosing the most suitable data management systems and identifying the crucial data required for insightful analysis. You'll also tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Your primary responsibilities include:
- Developing and maintaining data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools.
- Liaising with business teams and technical leads to gather requirements, identify data sources and data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing, and support UAT.
- Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Expertise in data warehousing, information management, data integration and business intelligence using the ETL tool Informatica PowerCenter.
- Knowledge of cloud, Power BI, and data migration to the cloud.
- Experience in Unix shell scripting and Python.
- Experience with relational SQL, big data, etc.

Preferred technical and professional experience:
- Knowledge of Microsoft Azure cloud.
- Experience in Informatica PowerCenter.
- Experience in Unix shell scripting and Python.

Posted 3 months ago

Apply

2 - 5 years

6 - 10 Lacs

Hyderabad

Work from Office

Source: Naukri

Responsibilities
As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in choosing the most suitable data management systems and identifying the crucial data required for insightful analysis. You'll also tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Your primary responsibilities include:
- Developing and maintaining data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools.
- Liaising with business teams and technical leads to gather requirements, identify data sources and data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing, and support UAT.
- Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Expertise in data warehousing, information management, data integration and business intelligence using the ETL tool Informatica PowerCenter.
- Knowledge of cloud, Power BI, and data migration to the cloud.
- Experience in Unix shell scripting and Python.
- Experience with relational SQL, big data, etc.

Preferred technical and professional experience:
- Knowledge of Microsoft Azure cloud.
- Experience in Informatica PowerCenter.
- Experience in Unix shell scripting and Python.

Posted 3 months ago

Apply

2 - 5 years

6 - 10 Lacs

Navi Mumbai

Work from Office


Responsibilities
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors
Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results

Your primary responsibilities include:
Develop and maintain data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools.
Liaise with business teams and technical leads: gather requirements, identify data sources and data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing, and support UAT.
Work with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.

Required education
Bachelor's Degree

Preferred education
Master's Degree

Required technical and professional expertise
Expertise in data warehousing, information management, data integration, and business intelligence using the ETL tool Informatica PowerCenter
Knowledge of cloud platforms, Power BI, and cloud data migration
Experience in Unix shell scripting and Python
Experience with relational SQL, Big Data, etc.

Preferred technical and professional experience
Knowledge of Microsoft Azure cloud
Experience in Informatica PowerCenter
Experience in Unix shell scripting and Python
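For the enterprise-search responsibility (Elasticsearch/Splunk), a typical first step is bulk-indexing documents so they become searchable. The sketch below assumes the official elasticsearch Python client (7.x/8.x-style API) and a cluster reachable at localhost:9200; the index name and document fields are placeholders, not anything specified by the listing.

```python
# Rough sketch of feeding documents into an Elasticsearch index for an enterprise
# search use case. Requires the `elasticsearch` Python client and a reachable
# cluster; the index name ("support-tickets") and fields are hypothetical.
from elasticsearch import Elasticsearch, helpers

def index_documents(docs, index_name="support-tickets", host="http://localhost:9200"):
    es = Elasticsearch(host)
    actions = (
        {"_index": index_name, "_id": doc["id"], "_source": doc}
        for doc in docs
    )
    # helpers.bulk batches the requests so large document sets index efficiently.
    success_count, errors = helpers.bulk(es, actions, raise_on_error=False)
    return success_count, errors

if __name__ == "__main__":
    sample = [
        {"id": 1, "title": "Login failure", "body": "User cannot authenticate"},
        {"id": 2, "title": "Slow report", "body": "Dashboard takes minutes to load"},
    ]
    indexed, errs = index_documents(sample)
    print(f"Indexed {indexed} documents, {len(errs)} errors")
```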

Posted 3 months ago

Apply

2 - 5 years

6 - 10 Lacs

Pune

Work from Office


Responsibilities
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors
Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results

Your primary responsibilities include:
Develop and maintain data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools.
Liaise with business teams and technical leads: gather requirements, identify data sources and data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing, and support UAT.
Work with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.

Required education
Bachelor's Degree

Preferred education
Master's Degree

Required technical and professional expertise
Expertise in data warehousing, information management, data integration, and business intelligence using the ETL tool Informatica PowerCenter
Knowledge of cloud platforms, Power BI, and cloud data migration
Experience in Unix shell scripting and Python
Experience with relational SQL, Big Data, etc.

Preferred technical and professional experience
Knowledge of Microsoft Azure cloud
Experience in Informatica PowerCenter
Experience in Unix shell scripting and Python
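On the stream-processing side of the pipelines mentioned above, many jobs reduce to a micro-batching loop: buffer incoming records and flush them to the target in small batches. The sketch below is a toy, standard-library illustration under that assumption; in practice the source would be Kafka, a cloud queue, or a CDC feed, and flush would write to the warehouse rather than print.

```python
# Illustrative micro-batching pattern for stream processing: buffer incoming records
# and flush them when the batch is full or a time budget expires. Pure standard
# library; queue.Queue stands in for a real streaming source.
import queue
import time

def consume_stream(source: queue.Queue, flush, batch_size: int = 100, max_wait_s: float = 2.0):
    """Drain `source`, calling `flush(batch)` when the batch fills or the wait expires."""
    batch, last_flush = [], time.monotonic()
    while True:
        try:
            record = source.get(timeout=0.5)
            if record is None:          # sentinel marks end of stream
                break
            batch.append(record)
        except queue.Empty:
            pass
        ready = len(batch) >= batch_size or (batch and time.monotonic() - last_flush >= max_wait_s)
        if ready:
            flush(batch)
            batch, last_flush = [], time.monotonic()
    if batch:
        flush(batch)                    # flush the tail on shutdown

if __name__ == "__main__":
    q = queue.Queue()
    for i in range(7):
        q.put({"event_id": i})
    q.put(None)
    consume_stream(q, flush=lambda b: print(f"flushed {len(b)} records"), batch_size=3)
```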

Posted 3 months ago

Apply

2 - 5 years

6 - 10 Lacs

Navi Mumbai

Work from Office


Responsibilities
Hiring manager and recruiter should collaborate to create the relevant verbiage.

Required education
Bachelor's Degree

Preferred education
Master's Degree

Required technical and professional expertise
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors
Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results

Your primary responsibilities include:
Develop and maintain data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools.
Liaise with business teams and technical leads: gather requirements, identify data sources and data quality issues, design target data structures, develop pipelines and data processing routines, perform unit testing, and support UAT.
Work with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.

Preferred technical and professional experience
Expertise in data warehousing, information management, data integration, and business intelligence using the ETL tool Informatica PowerCenter
Knowledge of cloud platforms, Power BI, and cloud data migration
Experience in Unix shell scripting and Python
Experience with relational SQL, Big Data, etc.
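The "implementing and validating predictive models" responsibility above usually means training on historical data and scoring on a holdout set before anything is promoted. The sketch below assumes scikit-learn and NumPy are available and runs on synthetic data; the 80/20 split and logistic regression are illustrative choices, not the team's prescribed method.

```python
# Sketch of implementing and validating a predictive model with scikit-learn.
# Synthetic data and an arbitrary 80/20 holdout split; purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))                      # four hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression().fit(X_train, y_train)

# Validation step: score on held-out data before the model goes anywhere near production.
print(f"Holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```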

Posted 3 months ago

Apply

2 - 5 years

6 - 10 Lacs

Pune

Work from Office


Responsibilities
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

Your primary responsibilities include:
Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors
Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results
Developing and maintaining data pipelines for batch and stream processing using Informatica PowerCenter or cloud ETL/ELT tools
Liaising with business teams and technical leads: gathering requirements, identifying data sources and data quality issues, designing target data structures, developing pipelines and data processing routines, performing unit testing, and supporting UAT
Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues

Required education
Bachelor's Degree

Preferred education
Master's Degree

Required technical and professional expertise
Expertise in data warehousing, information management, data integration, and business intelligence using the ETL tool Informatica PowerCenter
Knowledge of cloud platforms, Power BI, and cloud data migration
End-to-end knowledge of the existing HDFC Bank EDW/SOR
Experience in Unix shell scripting and Python
Experience with relational SQL, Big Data, etc.

Preferred technical and professional experience
Knowledge of Microsoft Azure cloud
End-to-end knowledge of the existing HDFC Bank EDW/SOR
Experience in Informatica PowerCenter, Unix shell scripting, and Python
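The "cleanse and integrate data in an efficient and reusable manner" item above often comes down to a few well-factored helpers. The sketch below assumes pandas is available and uses invented accounts/transactions feeds and column names purely for illustration; it is not a description of any actual bank schema.

```python
# Sketch of a reusable cleanse-and-integrate helper with pandas: standardise keys,
# drop duplicates, and join two hypothetical feeds. Column names are illustrative.
import pandas as pd

def cleanse(df: pd.DataFrame, key: str) -> pd.DataFrame:
    """Trim key whitespace, drop rows with no key, and keep the last record per key."""
    out = df.copy()
    out[key] = out[key].astype(str).str.strip()
    out = out[out[key] != ""]
    return out.drop_duplicates(subset=key, keep="last")

def integrate(accounts: pd.DataFrame, transactions: pd.DataFrame) -> pd.DataFrame:
    accounts = cleanse(accounts, "account_id")
    transactions = cleanse(transactions, "txn_id")
    # Left join keeps every transaction even if the account feed is late-arriving.
    return transactions.merge(accounts, on="account_id", how="left")

if __name__ == "__main__":
    accounts = pd.DataFrame({"account_id": ["A1 ", "A1", "A2"], "branch": ["N", "N", "S"]})
    txns = pd.DataFrame({"txn_id": ["T1", "T2"], "account_id": ["A1", "A3"], "amount": [100, 50]})
    print(integrate(accounts, txns))
```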

Posted 3 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

