
5267 PySpark Jobs - Page 7

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

We intend to hire a Data Engineer to handle day-to-day activities involving data ingestion from multiple source locations, help identify data sources, troubleshoot issues, and engage with a third-party vendor to meet stakeholders' needs.

Work Location: Chennai, Hyderabad or Pune (work from office). Shift hours: 2.00 pm to 11.00 pm IST. Immediate joiners required.

Required Skills:
Python; processing of large quantities of text documents
Extraction of text from Office and PDF documents
Input JSON to an API, output JSON to an API
NiFi (or a similar technology compatible with current EMIT practices)
Basic understanding of AI/ML concepts
Database/search engine/SOLR skills
SQL – build queries to analyze, create and update databases
Understands the basics of hybrid search
Experience working with terabytes (TB) of data
Basic OpenML/Python/Azure knowledge
Scripting knowledge/experience in an Azure environment to automate tasks
Cloud systems experience related to search and databases

Platforms: Databricks, Snowflake, ESRI ArcGIS/SDE, and a new GenAI app being developed.

Scope of work:
1. Ingest TB of data from multiple sources identified by the Ingestion Lead
2. Optimize data pipelines to improve data processing, speed, and data availability
3. Make data available for end users from several hundred LAN and SharePoint areas
4. Monitor data pipelines daily and fix issues related to scripts, platforms, and ingestion
5. Work closely with the Ingestion Lead and vendor on issues related to data ingestion

Technical skills demonstrated:
1. SOLR – backend database
2. NiFi – data movement
3. PySpark – data processing
4. Hive & Oozie – job monitoring
5. Querying – SQL, HQL and SOLR querying
6. SQL
7. Python

Behavioural skills demonstrated:
1. Excellent communication skills
2. Ability to receive direction from a lead and implement it
3. Prior experience working in an Agile setup, preferred
4. Experience troubleshooting technical issues and quality-control checking of work
5. Experience working with a globally distributed team in different …
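For context on the kind of work this posting describes, here is a minimal, illustrative PySpark sketch of a document-ingestion step: reading JSON produced by an upstream text-extraction stage, cleansing it, and persisting it for downstream indexing. It is a sketch only; the paths, table names and column names are assumptions for the example and are not taken from the job ad.

```python
# Illustrative sketch only: a minimal PySpark ingestion step of the kind the
# posting describes. Paths, table and column names below are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("doc-ingestion-sketch").getOrCreate()

docs_path = "/landing/extracted_text/*.json"   # hypothetical source location
target_table = "search_staging.documents"      # hypothetical target table

# Read JSON produced by an upstream text-extraction step (e.g. from PDFs).
raw = spark.read.json(docs_path)

# Basic cleansing: drop records with no text and normalise whitespace.
clean = (
    raw.filter(F.col("text").isNotNull())
       .withColumn("text", F.regexp_replace(F.col("text"), r"\s+", " "))
       .withColumn("ingested_at", F.current_timestamp())
)

# Persist for downstream indexing (e.g. handed off to NiFi or a SOLR indexer).
clean.write.mode("append").saveAsTable(target_table)
```

In a real pipeline of this sort the cleansed output would typically feed a NiFi flow or SOLR indexing job rather than a plain table; the table write here simply keeps the sketch self-contained.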

Posted 1 day ago

Apply

6.0 years

0 Lacs

Goregaon, Maharashtra, India

On-site

Source: LinkedIn

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
• Designs, implements and maintains reliable and scalable data infrastructure
• Writes, deploys and maintains software to build, integrate, manage, maintain, and quality-assure data
• Develops and delivers large-scale data ingestion, data processing, and data transformation projects on the Azure cloud
• Mentors and shares knowledge with the team to provide design reviews, discussions and prototypes
• Works with customers to deploy, manage, and audit standard processes for cloud products
• Adheres to and advocates for software and data engineering standard processes (e.g. data engineering pipelines, unit testing, monitoring, alerting, source control, code review and documentation)
• Deploys secure and well-tested software that meets privacy and compliance requirements; develops, maintains and improves the CI/CD pipeline
• Follows site-reliability engineering standard processes for service reliability: on-call rotations for services they maintain, and responsibility for defining and maintaining SLAs; designs, builds, deploys and maintains infrastructure as code; containerizes server deployments
• Works as part of a cross-disciplinary team, collaborating closely with other data engineers, architects, software engineers, data scientists, data managers and business partners in a Scrum/Agile setup

Mandatory skill sets ('must have' knowledge, skills and experiences): Synapse, ADF, Spark, SQL, PySpark, Spark SQL

Preferred skill sets ('good to have' knowledge, skills and experiences): Cosmos DB, data modeling, Databricks, Power BI, experience of having built an analytics solution with SAP as the data source for ingestion pipelines

Depth: The candidate should have in-depth, hands-on experience with end-to-end solution design in Azure Data Lake, ADF pipeline development and debugging, various file formats, Synapse and Databricks, with excellent coding skills in PySpark and SQL and strong logic-building capabilities. They should have sound knowledge of optimizing workloads.

Years of experience required: 6 to 9 years of relevant experience
Education qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% and above)
Expected joining: 3 weeks

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Engineering, Bachelor of Engineering, Bachelor of Technology, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)

Required Skills: Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}

Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date

Posted 1 day ago

Apply

10.0 years

0 Lacs

India

Remote

Source: LinkedIn

Job Title: Senior Data Engineer – Data Quality, Ingestion & API Development
Work Location: Trivandrum / Kochi (Remote or Onsite)
Job Type: Full-time
Experience Level: Senior Level
Industry: IT Services | Cloud & Data Engineering
Shift: US overlapping hours
Notice Period: Immediate joiners only
Preferred Candidates: Based in Kerala, Tamil Nadu, or Karnataka

About the Role
We are seeking a highly skilled Senior Data Engineer to lead the development of robust data ingestion frameworks, enforce data quality, and create high-performance APIs using a modern AWS-based tech stack. This role requires a minimum of 10 years of experience, including 5+ years specifically working on AWS platforms, and demands strong hands-on skills in Python, PySpark, AWS Glue, Lambda, CI/CD, DynamoDB, and EMR. This is a high-impact engineering position for professionals who can thrive in a fast-paced, cloud-first environment, working in US overlapping hours.

Key Responsibilities
Data Ingestion & Engineering: Architect and develop scalable ETL/ELT pipelines using AWS Glue, Lambda, EMR, and Step Functions. Integrate diverse data sources into secure and optimized ingestion frameworks.
Data Quality & Monitoring: Implement automated data validation, error handling, and quality checks to ensure integrity. Set up end-to-end monitoring, logging, and alerting systems for real-time issue resolution.
API Development: Design, build, and document secure, high-performance RESTful APIs for seamless integrations.
Collaboration & Agile Development: Work cross-functionally with stakeholders, data scientists, and DevOps teams. Actively participate in Agile ceremonies, code reviews, and CI/CD pipeline management (GitLab preferred).

Mandatory Skills & Experience
Total experience: 10+ years in data engineering roles
AWS expertise: minimum 5 years of experience with AWS services
Core skills: Python, PySpark; AWS Glue, Lambda, EMR; CI/CD pipelines (GitLab or equivalent); DynamoDB, S3, Step Functions
Strong understanding of data quality frameworks and API-driven architectures

Preferred Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
Familiarity with data lakehouse architectures
Exposure to Kinesis, Firehose, SQS is a plus

Additional Information
Work Location: Trivandrum / Kochi (Hybrid or Onsite)
Shift: US overlapping hours
Notice Period: Immediate joiners only
Preferred Location: Kerala, Tamil Nadu, Karnataka
Employment Type: Full-Time

Ready to build scalable, cloud-native data platforms that power real-time insights? Apply now and become part of a dynamic, high-impact engineering team.
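As a rough illustration of the automated data-quality checks this role calls for, the hedged PySpark sketch below validates a dataset before publishing it. The S3 paths, column names and thresholds are invented for the example and do not come from the posting.

```python
# Minimal, hedged sketch of an automated data-quality gate in PySpark.
# All paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-check-sketch").getOrCreate()

df = spark.read.parquet("s3://example-bucket/ingest/orders/")  # hypothetical path

total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

# Fail fast so the orchestrator (Step Functions, Airflow, etc.) can alert on the error.
if total == 0 or null_ids > 0 or dupes > 0:
    raise ValueError(
        f"Data quality check failed: rows={total}, null_ids={null_ids}, duplicates={dupes}"
    )

# Only publish the dataset once the checks pass.
df.write.mode("overwrite").parquet("s3://example-bucket/validated/orders/")
```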

Posted 1 day ago

Apply

10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office

Source: Naukri

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips.

As an AWS Data Engineer at Kyndryl, you will be responsible for designing, building, and maintaining scalable, secure, and high-performing data pipelines using AWS cloud-native services. This role requires extensive hands-on experience with both real-time and batch data processing, expertise in cloud-based ETL/ELT architectures, and a commitment to delivering clean, reliable, and well-modeled datasets.

Key Responsibilities:
Design and develop scalable, secure, and fault-tolerant data pipelines utilizing AWS services such as Glue, Lambda, Kinesis, S3, EMR, Step Functions, and Athena.
Create and maintain ETL/ELT workflows to support both structured and unstructured data ingestion from various sources, including RDBMS, APIs, SFTP, and streaming.
Optimize data pipelines for performance, scalability, and cost-efficiency.
Develop and manage data models, data lakes, and data warehouses on AWS platforms (e.g., Redshift, Lake Formation).
Collaborate with DevOps teams to implement CI/CD and infrastructure as code (IaC) for data pipelines using CloudFormation or Terraform.
Ensure data quality, validation, lineage, and governance through tools such as AWS Glue Data Catalog and AWS Lake Formation.
Work in concert with data scientists, analysts, and application teams to deliver data-driven solutions.
Monitor, troubleshoot, and resolve issues in production pipelines.
Stay abreast of AWS advancements and recommend improvements where applicable.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience
Bachelor's or master's degree in computer science, engineering, or a related field
Over 8 years of experience in data engineering
More than 3 years of experience with the AWS data ecosystem
Strong experience with PySpark, SQL, and Python
Proficiency in AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, CloudWatch, Athena, Step Functions
Familiarity with data modelling concepts, dimensional models, and data lake architectures
Experience with CI/CD, GitHub Actions, CloudFormation/Terraform
Understanding of data governance, privacy, and security best practices
Strong problem-solving and communication skills

Preferred Skills and Experience
Experience working as a Data Engineer and/or in cloud modernization
Experience with AWS Lake Formation and Data Catalog for metadata management
Knowledge of Databricks, Snowflake, or BigQuery for data analytics
AWS Certified Data Engineer or AWS Certified Solutions Architect is a plus
Strong problem-solving and analytical thinking
Excellent communication and collaboration abilities
Ability to work independently and in agile teams
A proactive approach to identifying and addressing challenges in data workflows

Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Join us as a Quality Automation Specialist - PySpark

In this key role, you'll be undertaking and enabling automated testing activities in all delivery models. We'll look to you to support teams to develop quality solutions and enable continuous integration and assurance of defect-free deployment of customer value. You'll be working closely with feature teams and a variety of stakeholders, giving you great exposure to professional development opportunities. We're offering this role at associate vice president level.

What you'll do
Joining us in a highly collaborative role, you'll be contributing to the transformation of testing using quality processes, tools, and methodologies, significantly improving control, accuracy and integrity. You'll implement testing techniques in the migration of an existing liquidity application, which has been rewritten in PySpark, to the AWS cloud. It's a chance to work with colleagues at multiple levels, and with cross-domain, domain, platform and feature teams, to build in quality as an integral part of all activities.

Additionally, you'll be:
Supporting the design of automation test strategies, aligned to business or programme goals
Evolving more predictive and intelligent testing approaches, based on automation and innovative testing products and solutions
Collaborating with stakeholders and feature teams and making sure that automated testing is performed and monitored as an essential part of planning and product delivery
Designing and creating a low-maintenance suite of stable, re-usable automated tests, which are usable both within the product or domain and across domains and systems in an end-to-end capacity
Applying testing and delivery standards by understanding the product development lifecycle along with mandatory, regulatory and compliance requirements

The skills you'll need
We're looking for someone with experience of automated testing, particularly from an Agile development or CI/CD environment. You'll be an innovative thinker who can identify opportunities and design solutions, coupled with the ability to develop complex automation code. You'll need at least nine years of experience with testing. You'll also need experience with PySpark and AWS while performing automation testing.

We'll also look for you to have:
Experience in end-to-end and automation testing using the latest tools as recommended by the enterprise tooling framework
A background of designing, developing and implementing automation frameworks in new environments
Excellent communication skills with the ability to communicate complex technical concepts to management-level colleagues
Good collaboration and stakeholder management skills
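For illustration of the automated PySpark testing this role centres on, here is a minimal, hedged sketch of a pytest-based unit test that exercises a small transformation against a local SparkSession. The transformation itself (a daily balance aggregation) is an assumption made purely for the example and is not taken from the posting.

```python
# Hedged sketch: testing a PySpark transformation with pytest and a local SparkSession.
# The aggregation function and column names are illustrative assumptions.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def aggregate_daily_balance(df):
    """Sum balances per account per business date."""
    return df.groupBy("account_id", "business_date").agg(
        F.sum("balance").alias("total_balance")
    )


@pytest.fixture(scope="session")
def spark():
    # A small local session keeps the test suite fast and CI-friendly.
    return SparkSession.builder.master("local[1]").appName("pyspark-tests").getOrCreate()


def test_aggregate_daily_balance(spark):
    data = [("A1", "2024-01-01", 100.0), ("A1", "2024-01-01", 50.0)]
    df = spark.createDataFrame(data, ["account_id", "business_date", "balance"])

    result = aggregate_daily_balance(df).collect()

    assert len(result) == 1
    assert result[0]["total_balance"] == pytest.approx(150.0)
```

Tests of this shape slot naturally into a CI/CD pipeline, since they need no cluster and run in seconds on a build agent.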

Posted 1 day ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Data Engineer Intern – Xiaomi India
Location: Bangalore, India
Duration: 6-month internship
Eligibility: Recent graduates (B.Tech/M.Tech in CS, IT or other related fields)

Xiaomi is one of the world's leading technology companies, with a strong presence in India across smartphones, smart devices, and internet services. At Xiaomi India, data is at the core of all strategic decisions. We're looking for passionate Data Engineer Interns to work on high-impact projects involving large-scale data systems, data modeling, and pipeline engineering to support business intelligence, analytics, and AI use cases.

Key Responsibilities
Assist in building scalable data pipelines using Python and SQL.
Support data modeling activities for analytics and reporting use cases.
Perform data cleansing, transformation, and validation using PySpark.
Collaborate with data engineers and analysts to ensure high data quality and availability.
Work on Hadoop ecosystem tools to process large datasets.
Contribute to data documentation and maintain version-controlled scripts.

Technical Skills Required
Strong proficiency in Python for data processing and scripting.
Good knowledge of SQL – writing complex queries, joins, aggregations.
Understanding of data modeling concepts – star/snowflake schema, fact/dimension tables.
Familiarity with the Big Data / Hadoop ecosystem – HDFS, Hive, Spark.
Basic exposure to PySpark will be a strong plus.
Experience with tools like Jupyter Notebook, VS Code, or any modern IDE.
Exposure to cloud platforms (AWS/Azure/GCP/Databricks) is a bonus.

Soft Skills
Eagerness to learn and work in a fast-paced, data-driven environment.
Strong analytical thinking and attention to detail.
Good communication and collaboration skills.
Self-starter with the ability to work independently and in teams.
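As a small illustration of the star-schema and PySpark join skills listed above, the sketch below joins a toy fact table to a dimension table and aggregates it. The table and column names are assumptions for the example only.

```python
# Hedged sketch: joining a fact table to a dimension table in PySpark
# (a typical star-schema query). Data and names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

fact_sales = spark.createDataFrame(
    [(1, 101, 2), (2, 102, 5)], ["sale_id", "product_key", "quantity"]
)
dim_product = spark.createDataFrame(
    [(101, "Phone"), (102, "Charger")], ["product_key", "product_name"]
)

# Typical analytics query: units sold per product.
report = (
    fact_sales.join(dim_product, "product_key")
              .groupBy("product_name")
              .agg(F.sum("quantity").alias("units_sold"))
)
report.show()
```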

Posted 1 day ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
Manage end-to-end feature development and resolve challenges faced in implementing it
Learn new technologies and apply them in feature development within the time frame provided
Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system

Preferred Education
Master's Degree

Required Technical and Professional Expertise
Overall, more than 6 years of experience, with more than 4 years of strong hands-on experience in Python and Spark
Strong technical ability to understand, design, write and debug applications in Python and PySpark
Strong problem-solving skills
Good to have: hands-on experience with cloud technology (AWS/GCP/Azure)

Preferred Technical and Professional Experience
Good to have: hands-on experience with cloud technology (AWS/GCP/Azure)

Posted 1 day ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Description
As a Technical Lead, you will work on both offshore and onsite client projects. You will work on projects involving Oracle BI Applications/FAW or OBIEE/OAC/ODI implementations. You will interact with the client to understand and gather requirements, and you will be responsible for technical design, development, and system/integration testing using Oracle methodologies.

Desired Profile
End-to-end ODI, OAC and Oracle BI Applications/FAW implementation experience
Expert knowledge of BI Applications/FAW, including basic and advanced configurations with Oracle eBS Suite/Fusion as the source system
Expert knowledge of OBIEE/OAC RPD design and report design
Expert knowledge of ETL (ODI) design / OCI DI / OCI Dataflow
Mandatory to have one of these skills: PL/SQL, BI Publisher, BI Apps
Good to have EDQ and PySpark skills
Architectural solution definition
Any industry-standard certifications will be a plus
Good knowledge of Oracle database and development; experience in database applications
Creativity, personal drive, influencing and negotiating, problem solving
Building effective relationships, customer focus, effective communication, coaching
Ready to travel as and when required by the project

Experience
8-12 years of data warehousing and business intelligence project experience
4-6 years of project experience on BI Applications/FAW and OBIEE/OAC/ODI/OCI DI, with at least 2 complete lifecycle implementations
4-6 years of specialized BI Applications and OBIEE/OAC/ODI/OCI DI customization and solution architecture experience
Worked on Financial, SCM or HR Analytics recently in implementation and configuration

Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 day ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, and provide regular support and guidance to project teams on complex coding, issue resolution and execution.

Your primary responsibilities include:
Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvement by testing the built solution and working under an agile framework.
Discover and implement the latest technology trends to maximize and build creative solutions.

Preferred Education
Master's Degree

Required Technical and Professional Expertise
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy.
SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred Technical and Professional Experience
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
Good to have: detection and prevention tools for Company products and Platform, and customer-facing …
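To illustrate the PySpark-plus-SQL combination this posting asks for, here is a minimal, hedged sketch that registers a DataFrame as a temporary view and queries it with Spark SQL. The data and view name are invented for the example.

```python
# Hedged sketch: combining the DataFrame API with Spark SQL in PySpark.
# The sample data and view name are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-sketch").getOrCreate()

events = spark.createDataFrame(
    [("user1", "click"), ("user1", "view"), ("user2", "click")],
    ["user_id", "event_type"],
)
events.createOrReplaceTempView("events")

# The same query could equally be written with groupBy/agg on the DataFrame.
summary = spark.sql("""
    SELECT user_id, COUNT(*) AS event_count
    FROM events
    GROUP BY user_id
""")
summary.show()
```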

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

On-site

Source: LinkedIn

About Oportun
Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

Working at Oportun
Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

Engineering Business Unit Overview
The charter for the Engineering group at Oportun is to be the world-class engineering force behind our innovative products. The group plays a vital role in designing, developing, and maintaining cutting-edge software solutions that power our mission and advance our business. We strike a balance between leveraging leading tools and developing in-house solutions to create member experiences that empower their financial independence. The talented engineers in this group are dedicated to delivering and maintaining performant, elegant, and intuitive systems to our business partners and retail members. Our platform combines service-oriented platform features with sophisticated user experience and is enabled through a best-in-class (and fun to use!) automated development infrastructure. We prove that FinTech is more fun, more challenging, and in our case, more rewarding as we build technology that changes our members' lives.

Engineering at Oportun is responsible for high-quality and scalable technical execution to achieve business goals and product vision. They ensure business continuity to members by effectively managing systems and services – overseeing technical architectures and system health. In addition, they are responsible for identifying and executing on the technical roadmap that enables product vision and fosters member and business growth in a scalable and efficient manner. The Enterprise Data and Technology (EDT) pillar within the Engineering Business Unit focuses on enabling wide use of corporate data assets whilst ensuring quality, availability and security across the data landscape.

Position Overview
As a Senior Data Engineer at Oportun, you will be a key member of our EDT team, responsible for designing, developing, and maintaining sophisticated software and data platforms in support of the charter of the engineering group. Your mastery of a technical domain enables you to take up business problems and solve them with a technical solution. With your depth of expertise and leadership abilities, you will actively contribute to architectural decisions, mentor junior engineers, and collaborate closely with cross-functional teams to deliver high-quality, scalable software solutions that advance our impact in the market.

This is a role where you will have the opportunity to lead the technology effort – from technical requirements gathering to final successful delivery of the product – for large initiatives (cross-functional, multi-month projects).

Responsibilities
Data Architecture and Design: Lead the design and implementation of scalable, efficient, and robust data architectures to meet business needs and analytical requirements. Collaborate with stakeholders to understand data requirements, build subject matter expertise and define optimal data models and structures.
Data Pipeline Development and Optimization: Design and develop data pipelines, ETL processes, and data integration solutions for ingesting, processing, and transforming large volumes of structured and unstructured data. Optimize data pipelines for performance, reliability, and scalability.
Database Management and Optimization: Oversee the management and maintenance of databases, data warehouses, and data lakes to ensure high performance, data integrity, and security. Implement and manage ETL processes for efficient data loading and retrieval.
Data Quality and Governance: Establish and enforce data quality standards, validation rules, and data governance practices to ensure data accuracy, consistency, and compliance with regulations. Drive initiatives to improve data quality and documentation of data assets.
Mentorship and Leadership: Provide technical leadership and mentorship to junior team members, assisting in their skill development and growth. Lead and participate in code reviews, ensuring best practices and high-quality code.
Collaboration and Stakeholder Management: Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand their data needs and deliver solutions that meet those needs. Communicate effectively with non-technical stakeholders to translate technical concepts into actionable insights and business value.
Performance Monitoring and Optimization: Implement monitoring systems and practices to track data pipeline performance, identify bottlenecks, and optimize for improved efficiency and scalability.

Common Software Engineering Requirements
You actively contribute to the end-to-end delivery of complex software applications, ensuring adherence to best practices and high overall quality standards.
You have a strong understanding of a business or system domain, with sufficient knowledge and expertise around the appropriate metrics and trends.
You collaborate closely with product managers, designers, and fellow engineers to understand business needs and translate them into effective software solutions.
You provide technical leadership and expertise, guiding the team in making sound architectural decisions and solving challenging technical problems. Your solutions anticipate scale, reliability, monitoring, integration, and extensibility.
You conduct code reviews and provide constructive feedback to ensure code quality, performance, and maintainability.
You mentor and coach junior engineers, fostering a culture of continuous learning, growth, and technical excellence within the team.
You play a significant role in the ongoing evolution and refinement of the tools and applications used by the team, and drive adoption of new practices within your team.
You take ownership of customer issues, including initial troubleshooting, identification of root cause and issue escalation or resolution, while maintaining the overall reliability and performance of our systems.
You set the benchmark for responsiveness, ownership and overall accountability of engineering systems.
You independently drive and lead multiple features, contribute to large projects and lead smaller projects. You can orchestrate work that spans multiple engineers within your team and keep all relevant stakeholders informed.
You support your lead/EM with the updates about your work and that of the team that they need to share with stakeholders, including escalation of issues.

Requirements
Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
5+ years of experience in data engineering, with a focus on data architecture, ETL, and database management.
Proficiency in programming languages like Python/PySpark and Java/Scala.
Expertise in big data technologies such as Hadoop, Spark, Kafka, etc.
In-depth knowledge of SQL and experience with various database technologies (e.g., PostgreSQL, MySQL, NoSQL databases).
Experience and expertise in building complex end-to-end data pipelines.
Experience with orchestration and designing job schedules using CI/CD tools like Jenkins and Airflow.
Ability to work in an Agile environment (Scrum, Lean, Kanban, etc.).
Ability to mentor junior team members.
Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., AWS Redshift, S3, Azure SQL Data Warehouse).
Strong leadership, problem-solving, and decision-making skills.
Excellent communication and collaboration abilities.

We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/.

We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI's Internet Crime Complaint Center (IC3).

Posted 1 day ago

Apply

5.0 years

5 - 9 Lacs

Pune

On-site

The role involves building and managing data pipelines, troubleshooting issues, and ensuring data accuracy across various platforms such as Azure Synapse Analytics, Azure Data Lake Gen2, and SQL environments. This position requires extensive SQL experience and a strong background in PySpark development.

Responsibilities
Data Engineering: Work with Azure Synapse pipelines and PySpark for data transformation and pipeline management. Perform data integration and schema updates in Delta Lake environments, ensuring smooth data flow and accurate reporting. Work with our Azure DevOps team on CI/CD processes for deployment of Infrastructure as Code (IaC) and workspace artifacts. Develop custom solutions for our customers as defined by our Data Architect and assist in improving our data solution patterns over time.
Documentation: Document ticket resolutions, testing protocols, and data validation processes. Collaborate with other stakeholders to provide specifications and quotations for enhancements requested by customers.
Ticket Management: Monitor the Jira ticket queue and respond to tickets as they are raised. Understand ticket issues, utilizing extensive SQL, Synapse Analytics, and other tools to troubleshoot them. Communicate effectively with the customer users who raised the tickets and collaborate with other teams (e.g., FinOps, Databricks) as needed to resolve issues.
Troubleshooting and Support: Handle issues related to ETL pipeline failures, Delta Lake processing, or data inconsistencies in Synapse Analytics. Provide prompt resolution of data pipeline and validation issues, ensuring data integrity and performance.

Desired Skills & Requirements
We are seeking a candidate with 5+ years of Dynamics 365 ecosystem experience and a strong PySpark development background. While various profiles may apply, we highly value a strong person-organization fit. Our ideal candidate possesses the following attributes and qualifications:
Extensive experience with SQL, including query writing and troubleshooting in Azure SQL, Synapse Analytics, and Delta Lake environments.
Strong understanding and experience in implementing and supporting ETL processes, data lakes, and data engineering solutions.
Proficiency in using Azure Synapse Analytics, including workspace management, pipeline creation, and data flow management.
Hands-on experience with PySpark for data processing and automation.
Ability to use VPNs, MFA, RDP, jump boxes/jump hosts, etc., to operate within customers' secure environments.
Some experience with Azure DevOps CI/CD IaC and release pipelines.
Ability to communicate effectively both verbally and in writing, with strong problem-solving and analytical skills.
Understanding of the operation and underlying data structure of D365 Finance and Operations, Business Central, and Customer Engagement.
Experience with data engineering in Microsoft Fabric.
Experience with Delta Lake and Azure data engineering concepts (e.g., ADLS, ADF, Synapse, AAD, Databricks).
Certifications in Azure Data Engineering.

Why Join Us?
Opportunity to work with innovative technologies in a dynamic environment, with a progressive work culture and a global perspective where your ideas truly matter and growth opportunities are endless.
Work with the latest Microsoft technologies alongside Dynamics professionals committed to driving customer success.
Enjoy the flexibility to work from anywhere, with a work-life balance that suits your lifestyle.
Competitive salary and comprehensive benefits package.
Career growth and professional development opportunities.
A collaborative and inclusive work culture.
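As a rough sketch of the Delta Lake schema-update work described in this listing, the example below appends a batch containing a new column to a Delta table with schema evolution enabled. It assumes a Spark environment where Delta Lake is available (such as a Synapse Spark pool or Databricks); the storage path and column names are illustrative assumptions, not details from the posting.

```python
# Hedged sketch: appending to a Delta table with schema evolution enabled.
# Requires a Spark runtime with Delta Lake configured; path and columns are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-schema-update-sketch").getOrCreate()

# New batch of records that introduces an extra column (`currency`).
updates = spark.createDataFrame(
    [("C001", 250.0, "EUR")], ["customer_id", "amount", "currency"]
)

(
    updates.write.format("delta")
           .mode("append")
           .option("mergeSchema", "true")   # allow the new column to be added to the table schema
           .save("abfss://lake@storageaccount.dfs.core.windows.net/finance/transactions")
)
```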

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Pune

On-site

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description

Role Purpose
The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations and developing technical capability within the Production Specialists.

Do
Oversee and support the process by reviewing daily transactions on performance parameters
Review the performance dashboard and the scores for the team
Support the team in improving performance parameters by providing technical support and process guidance
Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
Ensure standard processes and procedures are followed to resolve all client queries
Resolve client queries as per the SLAs defined in the contract
Develop understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
Document and analyze call logs to spot the most frequently occurring trends to prevent future problems
Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
Ensure all product information and disclosures are given to clients before and after the call/email requests
Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries
Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
If unable to resolve the issues, escalate them to TA & SES in a timely manner
Provide product support and resolution to clients by performing a question diagnosis while guiding users through step-by-step solutions
Troubleshoot all client queries in a user-friendly, courteous and professional manner
Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
Organize ideas and effectively communicate oral messages appropriate to listeners and situations
Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
Mentor and guide Production Specialists on improving technical knowledge
Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
Develop and conduct trainings (triages) within products for Production Specialists as per target
Inform the client about the triages being conducted
Undertake product trainings to stay current with product features, changes and updates
Enroll in product-specific and any other trainings per client requirements/recommendations
Identify and document the most common problems and recommend appropriate resolutions to the team
Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability Development | Triages completed, technical test performance

Mandatory Skills: PySpark.
Experience: 5-8 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention – of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA – as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 1 day ago

Apply

6.0 years

0 Lacs

Bengaluru

On-site

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description: Databricks Lead
Should have 6 years of experience.
Must-have skills: Databricks, Delta Lake, PySpark or Scala Spark, Unity Catalog.
Good-to-have skills: Azure/AWS cloud skills.
The role involves ingesting and transforming batch and streaming data on the Databricks Lakehouse Platform.
Excellent communication skills.

Mandatory Skills: Databricks - Data Engineering. Experience: 5-8 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
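As a hedged illustration of the batch-and-streaming ingestion this Databricks Lead role describes, the sketch below uses Auto Loader to land raw JSON files in a Unity Catalog Delta table. The bucket, checkpoint path, and catalog.schema.table names are assumptions for illustration, and `spark` is the session a Databricks notebook provides.

```python
# Minimal sketch: incremental ingestion of raw JSON files into a Unity Catalog
# Delta table with Databricks Auto Loader. Paths and table names are illustrative;
# `spark` is the ambient SparkSession of a Databricks notebook.
from pyspark.sql import functions as F

raw_path = "s3://example-bucket/raw/events/"              # assumed landing zone
checkpoint = "s3://example-bucket/_checkpoints/events"    # assumed checkpoint location

events = (
    spark.readStream.format("cloudFiles")                 # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", checkpoint)      # schema tracking for inference
    .load(raw_path)
    .withColumn("ingest_ts", F.current_timestamp())
)

(
    events.writeStream
    .option("checkpointLocation", checkpoint)
    .trigger(availableNow=True)                           # process the backlog, then stop
    .toTable("main.bronze.events")                        # Unity Catalog: catalog.schema.table
)
```

Running with `availableNow=True` processes whatever has landed since the last checkpoint and then stops, which keeps the same pipeline usable for both scheduled batch runs and continuous streaming.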

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Required Skills:
Python
Processing of large quantities of text documents
Extraction of text from Office and PDF documents
Input JSON to an API, output JSON to an API
NiFi (or a similar compatible technology)
Basic understanding of AI/ML concepts
Database/Search engine/SOLR skills
SQL – build queries to analyze, create and update databases
Understands the basics of hybrid search
Experience working with terabytes (TB) of data
Basic OpenML/Python/Azure knowledge
Scripting knowledge/experience in an Azure environment to automate
Cloud systems experience related to search and databases

Platforms:
Databricks
Snowflake
ESRI ArcGIS / SDE
New GenAI app being developed

Scope of work:
1. Ingest TB of data from multiple sources identified by the Ingestion Lead
2. Optimize data pipelines to improve data processing, speed, and data availability
3. Make data available for end users from several hundred LAN and SharePoint areas
4. Monitor data pipelines daily and fix issues related to scripts, platforms, and ingestion
5. Work closely with the Ingestion Lead & Vendor on issues related to data ingestion

Technical Skills demonstrated:
1. SOLR - backend database
2. NiFi - data movement
3. PySpark - data processing
4. Hive & Oozie - job monitoring
5. Querying - SQL, HQL and SOLR querying
6. SQL
7. Python
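To make the scope concrete, here is a minimal, hedged sketch of one slice of such a pipeline: extracting text from PDF files in parallel with PySpark and indexing the results into a Solr collection. The `pypdf` and `pysolr` libraries, the mount path, the Solr URL, and the field names are illustrative assumptions, not requirements from the posting.

```python
# Minimal sketch: extract text from PDFs at scale with PySpark and index the
# results into a Solr collection. Library choices and endpoints are assumptions.
import io
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pdf-ingest").getOrCreate()

pdfs = (
    spark.read.format("binaryFile")                  # built-in binary file source
    .option("pathGlobFilter", "*.pdf")
    .load("/mnt/landing/documents/")                 # assumed mount of the source share
    .select("path", "content")
)

def index_partition(rows):
    from pypdf import PdfReader                      # text extraction (assumed library)
    import pysolr                                    # Solr client (assumed library)
    solr = pysolr.Solr("http://solr-host:8983/solr/documents", timeout=30)
    docs = []
    for row in rows:
        reader = PdfReader(io.BytesIO(row["content"]))
        text = "\n".join(page.extract_text() or "" for page in reader.pages)
        docs.append({"id": row["path"], "content_txt": text})
    if docs:
        solr.add(docs, commit=False)                 # batch the commit elsewhere

pdfs.foreachPartition(index_partition)
```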

Posted 1 day ago

Apply

7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

About The Role
Grade Level (for internal use): 10

The Team
You will be an expert contributor and part of the Rating Organization’s Data Services Product Engineering Team. This team, which has broad and expert knowledge of the Ratings organization’s critical data domains, technology stacks, and architectural patterns, fosters knowledge sharing and collaboration that results in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. Be a part of a unique opportunity to build and evolve S&P Ratings’ next-gen analytics platform.

Responsibilities:
Design and implement innovative software solutions to enhance S&P Ratings' cloud-based data platforms.
Mentor a team of engineers, fostering a culture of trust, continuous growth, and collaborative problem-solving.
Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals.
Manage and improve existing software solutions, ensuring high performance and scalability.
Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes.
Produce comprehensive technical design documents and conduct technical walkthroughs.

Experience & Qualifications
Bachelor’s degree in Computer Science, Information Systems, Engineering, or an equivalent or higher qualification is required
Proficient with software development lifecycle (SDLC) methodologies like Agile and test-driven development
7+ years of development experience in enterprise products and modern web development technologies: Java/J2EE, UI frameworks like Angular and React, SQL, Oracle, and NoSQL databases like MongoDB
Experience designing transactional/data warehouse/data lake systems and data integrations with the Big Data ecosystem leveraging AWS cloud technologies
Experience with Delta Lake systems like Databricks using AWS cloud technologies and PySpark is a plus
Thorough understanding of distributed computing
Passionate, smart, and articulate developer
Quality-first mindset with a strong background and experience in developing products for a global audience at scale
Excellent analytical thinking, interpersonal, oral and written communication skills with a strong ability to influence both IT and business partners
Superior knowledge of system architecture, object-oriented design, and design patterns
Good work ethic, self-starter, and results-oriented
Excellent communication skills are essential, with strong verbal and writing proficiencies

Additional Preferred Qualifications
Experience working with AWS
Experience with the SAFe Agile Framework
Bachelor's/PG degree in Computer Science, Information Systems, or equivalent
Hands-on experience contributing to application architecture and designs, with proven software/enterprise integration design principles
Ability to prioritize and manage work to critical project timelines in a fast-paced environment
Excellent analytical and communication skills are essential, with strong verbal and writing proficiencies
Ability to train and mentor

About S&P Global Ratings
At S&P Global Ratings, our analyst-driven credit ratings, research, and sustainable finance opinions provide critical insights that are essential to translating complexity into clarity so market participants can uncover opportunities and make decisions with conviction.
By bringing transparency to the market through high-quality independent opinions on creditworthiness, we enable growth across a wide variety of organizations, including businesses, governments, and institutions. S&P Global Ratings is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/ratings What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. 
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. S&P Global has a Securities Disclosure and Trading Policy (“the Policy”) that seeks to mitigate conflicts of interest by monitoring and placing restrictions on personal securities holding and trading. The Policy is designed to promote compliance with global regulations. In some Divisions, pursuant to the Policy’s requirements, candidates at S&P Global may be asked to disclose securities holdings. Some roles may include a trading prohibition and remediation of positions when there is an effective or potential conflict of interest. Employment at S&P Global is contingent upon compliance with the Policy. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 312486 Posted On: 2025-05-14 Location: Mumbai, Maharashtra, India Show more Show less
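As a hedged illustration of the "Delta Lake on AWS with PySpark" experience this S&P Global Ratings role calls out as a plus, the sketch below upserts a staging extract into a curated Delta table on S3. The bucket paths, key column, and the assumption of a Delta-enabled session (for example, a Databricks notebook where `spark` is predefined) are illustrative only.

```python
# Minimal sketch of an upsert into a Delta Lake table on S3 with PySpark.
# Paths, the join key, and the Delta-enabled session are assumptions.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

target_path = "s3://example-ratings-lake/curated/entities"     # assumed location

updates = (
    spark.read.parquet("s3://example-ratings-lake/staging/entities/")
    .withColumn("load_date", F.current_date())
)

target = DeltaTable.forPath(spark, target_path)
(
    target.alias("t")
    .merge(updates.alias("s"), "t.entity_id = s.entity_id")    # assumed key column
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```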

Posted 1 day ago

Apply

12.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

About The Role
Grade Level (for internal use): 12

The Team
You will be an expert contributor and part of the Rating Organization’s Data Services Product Engineering Team. This team, which has broad and expert knowledge of the Ratings organization’s critical data domains, technology stacks, and architectural patterns, fosters knowledge sharing and collaboration that results in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. Be a part of a unique opportunity to build and evolve S&P Ratings’ next-gen analytics platform.

Responsibilities:
Architect, design, and implement innovative software solutions to enhance S&P Ratings' cloud-based analytics platform.
Mentor a team of engineers (as required), fostering a culture of trust, continuous growth, and collaborative problem-solving.
Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals.
Manage and improve existing software solutions, ensuring high performance and scalability.
Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes.
Produce comprehensive technical design documents and conduct technical walkthroughs.

Experience & Qualifications
Bachelor’s degree in Computer Science, Information Systems, Engineering, or an equivalent or higher qualification is required
Proficient with software development lifecycle (SDLC) methodologies like Agile and test-driven development
Total 12+ years of experience, with 8+ years designing enterprise products, modern data stacks, and analytics platforms
6+ years of hands-on experience contributing to application architecture and designs, with proven software/enterprise integration design patterns and full-stack knowledge including modern distributed front-end and back-end technology stacks
5+ years of full-stack development experience in modern web development technologies: Java/J2EE, UI frameworks like Angular and React, SQL, Oracle, and NoSQL databases like MongoDB
Experience with Delta Lake systems like Databricks using AWS cloud technologies and PySpark is a plus
Experience designing transactional/data warehouse/data lake systems and data integrations with the Big Data ecosystem leveraging AWS cloud technologies
Thorough understanding of distributed computing
Passionate, smart, and articulate developer
Quality-first mindset with a strong background and experience in developing products for a global audience at scale
Excellent analytical thinking, interpersonal, oral and written communication skills with a strong ability to influence both IT and business partners
Superior knowledge of system architecture, object-oriented design, and design patterns
Good work ethic, self-starter, and results-oriented
Excellent communication skills are essential, with strong verbal and writing proficiencies

Additional Preferred Qualifications
Experience working with AWS
Experience with the SAFe Agile Framework
Bachelor's/PG degree in Computer Science, Information Systems, or equivalent.
Hands-on experience contributing to application architecture & designs, proven software/enterprise integration design principles Ability to prioritize and manage work to critical project timelines in a fast-paced environment Excellent Analytical and communication skills are essential, with strong verbal and writing proficiencies Ability to train and mentor About S&P Global Ratings At S&P Global Ratings, our analyst-driven credit ratings, research, and sustainable finance opinions provide critical insights that are essential to translating complexity into clarity so market participants can uncover opportunities and make decisions with conviction. By bringing transparency to the market through high-quality independent opinions on creditworthiness, we enable growth across a wide variety of organizations, including businesses, governments, and institutions. S&P Global Ratings is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/ratings What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. 
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. S&P Global has a Securities Disclosure and Trading Policy (“the Policy”) that seeks to mitigate conflicts of interest by monitoring and placing restrictions on personal securities holding and trading. The Policy is designed to promote compliance with global regulations. In some Divisions, pursuant to the Policy’s requirements, candidates at S&P Global may be asked to disclose securities holdings. Some roles may include a trading prohibition and remediation of positions when there is an effective or potential conflict of interest. Employment at S&P Global is contingent upon compliance with the Policy. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 312492 Posted On: 2025-05-14 Location: Mumbai, Maharashtra, India Show more Show less

Posted 1 day ago

Apply

1.0 - 3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Description:
Location: Indore, Noida, Pune and Bengaluru
Qualifications: BE/B.Tech/MCA/M.Tech/M.Com in Computer Science or related field

Required Skills:
EDW Expertise: Hands-on experience with Teradata or Oracle.
PL/SQL Proficiency: Strong ability to write complex queries.
Performance Tuning: Expertise in optimizing queries to meet SLA requirements.
Communication: Strong verbal and written communication skills.
Experience Required: 1-3 Years

Preferred Skills:
Cloud Technologies: Working knowledge of AWS S3 and Redshift or equivalent.
Database Migration: Familiarity with database migration processes.
Big Data Tools: Understanding of SparkQL and PySpark.
Programming: Experience with Python for data processing and analytics.
Data Management: Experience with import/export operations.

Roles & Responsibilities
Module Ownership: Manage a module and assist the team.
Optimized PL/SQL Development: Write efficient queries.
Performance Tuning: Improve database speed and efficiency.
Requirement Analysis: Work with business users to refine needs.
Application Development: Build solutions using complex SQL queries.
Data Validation: Ensure integrity of large datasets (TB/PB).
Testing & Debugging: Conduct unit testing and fix issues.
Database Strategies: Apply best practices for development.

Interested candidates can share their resumes at anubhav.pathania@impetus.com
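Since the posting pairs EDW and PL/SQL tuning skills with preferred PySpark knowledge, here is a hedged sketch of how the two often meet in practice: pushing a filter down to the warehouse and reading the result into Spark in parallel over JDBC. The connection URL, credentials, table, and partition bounds are hypothetical placeholders, and the vendor JDBC driver is assumed to be on the Spark classpath.

```python
# Minimal sketch: pull a large EDW table into Spark over JDBC with a pushed-down
# query and partitioned reads, so the warehouse does the filtering and Spark
# parallelises the fetch. All connection details and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("edw-extract").getOrCreate()

pushdown_query = """
    (SELECT order_id, customer_id, order_date, amount
       FROM sales.orders
      WHERE order_date >= DATE '2024-01-01') src
"""

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//edw-host:1521/ORCLPDB")   # or a Teradata URL
    .option("dbtable", pushdown_query)                            # subquery pushed to the EDW
    .option("user", "etl_user")
    .option("password", "***")
    .option("partitionColumn", "order_id")                        # numeric column to split the read
    .option("lowerBound", "1")
    .option("upperBound", "100000000")
    .option("numPartitions", "16")
    .load()
)

orders.write.mode("overwrite").parquet("s3a://example-bucket/extracts/orders/")
```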

Posted 1 day ago

Apply

15.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Linkedin logo

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Data Architecture Principles
Good to have skills: PySpark
Minimum 15 Year(s) Of Experience Is Required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the application development process and ensuring successful project delivery. No location flex other than Chennai and Bengaluru.

Roles & Responsibilities:
- Expected to be an SME with deep knowledge and experience.
- Should have influencing and advisory skills.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Lead the application development team in designing and implementing software solutions.
- Collaborate with stakeholders to gather requirements and define project scope.
- Provide technical guidance and mentorship to team members.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Data Architecture Principles.
- Good To Have Skills: Experience with PySpark.
- Strong understanding of data architecture principles and best practices.
- Experience in designing and implementing data solutions.
- Knowledge of data modeling and database design.
- Ability to analyze complex data sets and provide insights.

Additional Information:
- The candidate should have a minimum of 15 years of experience in Data Architecture Principles.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

We deliver the world’s most complex projects Work as part of a collaborative and inclusive team Enjoy a varied & challenging role Building on our past. Ready for the future Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we’re bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects. The Role Develop and implement data pipelines for ingesting and collecting data from various sources into a centralized data platform. Develop and maintain ETL jobs using AWS Glue services to process and transform data at scale. Optimize and troubleshoot AWS Glue jobs for performance and reliability. Utilize Python and PySpark to efficiently handle large volumes of data during the ingestion process. Collaborate with data architects to design and implement data models that support business requirements. Create and maintain ETL processes using Airflow, Python and PySpark to move and transform data between different systems. Implement monitoring solutions to track data pipeline performance and proactively identify and address issues. Manage and optimize databases, both SQL and NoSQL, to support data storage and retrieval needs. Familiarity with Infrastructure as Code (IaC) tools like Terraform, AWS CDK and others. Proficiency in event-driven integrations, batch-based and API-led data integrations. Proficiency in CICD pipelines such as Azure DevOps, AWS pipelines or Github Actions. About You To be considered for this role it is envisaged you will possess the following attributes: Technical and Industry Experience: Independent Integration Developer with over 5+ years of experience in developing and delivering integration projects in an agile or waterfall-based project environment. Proficiency in Python, PySpark and SQL programming language for data manipulation and pipeline development Hands-on experience with AWS Glue, Airflow, Dynamo DB, Redshift, S3 buckets, Event-Grid, and other AWS services Experience implementing CI/CD pipelines, including data testing practices. Proficient in Swagger, JSON, XML, SOAP and REST based web service development Behaviors Required: Driven by our values and purpose in everything we do. Visible, active, hands on approach to help teams be successful. Strong proactive planning ability. Optimistic, energetic, problem solver, ability to see long term business outcomes. Collaborative, ability to listen, compromise to make progress. Stronger together mindset, with a focus on innovation & creation of tangible / realized value. Challenge status quo. Education – Qualifications, Accreditation, Training: Degree in Computer Science and/or related fields AWS Data Engineering certifications desirable Moving forward together We’re committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. 
We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law. We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Company Worley Primary Location IND-MM-Mumbai Job Digital Solutions Schedule Full-time Employment Type Employee Job Level Experienced Job Posting Jun 4, 2025 Unposting Date Jul 4, 2025 Reporting Manager Title Director Show more Show less
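The Worley role above centres on AWS Glue ETL jobs written in PySpark that move data between S3, Redshift, and other stores. The sketch below is a hedged skeleton of such a Glue job, reading a catalogued S3 dataset, applying a transformation, and writing curated Parquet back to S3; the database, table, column, and bucket names are placeholders, not details from the posting.

```python
# Minimal sketch of an AWS Glue PySpark job: read a catalogued S3 dataset,
# filter and enrich it, and write curated Parquet back to S3.
# Catalog, table, and bucket names are illustrative placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw dataset registered in the Glue Data Catalog
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="sensor_readings"
)

df = (
    dyf.toDF()
    .filter(F.col("quality_flag") == "GOOD")
    .withColumn("reading_date", F.to_date("reading_ts"))
)

# Write the curated, partitioned output back to S3
df.write.mode("append").partitionBy("reading_date").parquet(
    "s3://example-curated-bucket/sensor_readings/"
)

job.commit()
```

In practice a job like this would be triggered from Airflow or a Glue workflow, with the bookmark/checkpoint behaviour and Redshift loads handled as separate, monitored steps.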

Posted 1 day ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Snowflake Data Warehouse, PySpark, Core Banking
Good to have skills: AWS BigData
Minimum 5 Year(s) Of Experience Is Required
Educational Qualification: 15 years full time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Snowflake Data Warehouse, Core Banking, PySpark.
- Good To Have Skills: Experience with AWS BigData.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with data governance and data quality frameworks.

Additional Information:
- The candidate should have minimum 5 years of experience in Snowflake Data Warehouse.
- This position is based in Pune.
- A 15 years full time education is required.
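Because this role combines PySpark pipelines with a Snowflake warehouse, here is a hedged sketch of the typical last step of such a pipeline: landing a cleaned DataFrame in Snowflake via the Spark-Snowflake connector. All connection values and table names are placeholders, the connector package is assumed to be installed, and in practice credentials would come from a secret store rather than literals.

```python
# Minimal sketch: write a cleaned PySpark DataFrame to Snowflake using the
# Spark-Snowflake connector. Connection values and names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("snowflake-load").getOrCreate()

sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",                 # use a secret manager in real jobs
    "sfDatabase": "ANALYTICS",
    "sfSchema": "CORE",
    "sfWarehouse": "LOAD_WH",
}

txns = (
    spark.read.parquet("s3a://example-bucket/staging/transactions/")
    .dropDuplicates(["txn_id"])
    .withColumn("load_ts", F.current_timestamp())
)

(
    txns.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "FACT_TRANSACTIONS")
    .mode("append")
    .save()
)
```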

Posted 1 day ago

Apply

5.0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site

Linkedin logo

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: Microsoft Azure Databricks, Microsoft Azure Data Services
Minimum 5 Year(s) Of Experience Is Required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with teams to develop and enhance applications to align with business needs.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute on key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead the application development process
- Implement best practices for application design and development
- Conduct code reviews and ensure code quality

Professional & Technical Skills:
- Must To Have Skills: Proficiency in PySpark
- Good To Have Skills: Experience with Microsoft Azure Databricks, Microsoft Azure Data Services
- Strong understanding of distributed computing and data processing
- Experience in building scalable and efficient data pipelines
- Proficient in data manipulation and transformation using PySpark

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark
- This position is based at our Bhubaneswar office
- A 15 years full-time education is required
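For a sense of the day-to-day PySpark work on Azure Databricks this posting implies, the sketch below reads raw CSVs from ADLS Gen2, applies a simple conformance step, and writes a Delta table. The storage account, container paths, and column names are illustrative assumptions, and `spark` is the session a Databricks notebook provides.

```python
# Minimal sketch: a PySpark transformation step on Azure Databricks, reading raw
# CSVs from ADLS Gen2 and writing a conformed Delta table. Paths are illustrative.
from pyspark.sql import functions as F

raw = (
    spark.read.option("header", "true")
    .csv("abfss://raw@examplestorageacct.dfs.core.windows.net/customers/")
)

conformed = (
    raw.dropDuplicates(["customer_id"])
    .withColumn("email", F.lower(F.trim("email")))   # basic cleansing
    .withColumn("ingest_date", F.current_date())
)

(
    conformed.write.format("delta")
    .mode("overwrite")
    .save("abfss://curated@examplestorageacct.dfs.core.windows.net/customers/")
)
```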

Posted 1 day ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Note: Please apply only if you:
- Have 6 years or more of relevant experience in Data Science (excluding internships)
- Are comfortable working 5 days a week from Gurugram, Haryana
- Are an immediate joiner or currently serving your notice period

About Eucloid
At Eucloid, innovation meets impact. As a leader in AI and Data Science, we create solutions that redefine industries—from Hi-tech and D2C to Healthcare and SaaS. With partnerships with giants like Databricks, Google Cloud, and Adobe, we’re pushing boundaries and building next-gen technology. Join our talented team of engineers, scientists, and visionaries from top institutes like IITs, IIMs, and NITs. At Eucloid, growth is a promise, and your work will drive transformative results for Fortune 100 clients.

What You’ll Do
As a GenAI Engineer, you will play a pivotal role in designing and deploying data-driven and GenAI-powered solutions. Your responsibilities will include:
Analyzing large sets of structured and unstructured data to extract meaningful insights and drive business impact.
Designing and developing Machine Learning models, including regression, time series forecasting, clustering, classification, and NLP.
Building, fine-tuning, and deploying Large Language Models (LLMs) such as GPT, BERT, or LLaMA for tasks like text summarization, generation, and classification.
Working with Hugging Face Transformers, LangChain, and vector databases (e.g., FAISS, Pinecone) to develop scalable GenAI pipelines.
Applying prompt engineering techniques and Reinforcement Learning with Human Feedback (RLHF) to optimize GenAI applications.
Building and deploying models using Python, R, TensorFlow, PyTorch, and Scikit-learn within production-ready environments like Flask, Azure Functions, and AWS Lambda.
Developing and maintaining scalable data pipelines in collaboration with data engineers.
Implementing solutions on cloud platforms like AWS, Azure, or GCP for scalable and high-performance AI/ML applications.
Enhancing BI and visualization tools such as Tableau, Power BI, Qlik, and Plotly to communicate data insights effectively.
Collaborating with stakeholders to translate business challenges into GenAI/data science problems and actionable solutions.
Staying updated on emerging GenAI and AI/ML technologies and incorporating best practices into projects.

What Makes You a Fit
Academic Background: Bachelor’s or Master’s degree in Data Science, Computer Science, Mathematics, Statistics, or a related field.
Technical Expertise:
6+ years of hands-on experience in applying Machine Learning techniques (clustering, classification, regression, NLP).
Strong proficiency in Python and SQL, with experience in frameworks like Flask or Django.
Expertise in Big Data environments using PySpark.
Deep understanding of ML frameworks such as TensorFlow, PyTorch, and Scikit-learn.
Hands-on experience with Hugging Face Transformers, OpenAI API, or similar GenAI libraries.
Knowledge of vector databases and retrieval-augmented generation (RAG) techniques.
Proficiency in cloud-based AI/ML deployment on AWS, Azure, or GCP.
Experience in Docker and containerization for ML model deployment.
Knowledge of code management methodologies and best practices for implementing scalable ML/GenAI solutions.
Extra Skills:
Experience in Deep Learning and Reinforcement Learning.
Hands-on experience with NLP, Text Mining, and LLM architectures.
Experience in business intelligence and data visualization tools (Tableau, Power BI, Qlik).
Experience with prompt engineering and fine-tuning LLMs for production use cases. Ability to effectively communicate insights and translate technical work into business value. Why You’ll Love It Here Innovate with the Best Tech: Work on groundbreaking projects using AI, GenAI, LLMs, and massive-scale data platforms. Tackle challenges that push the boundaries of innovation. Impact Industry Giants: Deliver business-critical solutions for Fortune 100 clients across Hi-tech, D2C, Healthcare, SaaS, and Retail. Partner with platforms like Databricks, Google Cloud, and Adobe to create high-impact products. Collaborate with a World-Class Team: Join exceptional professionals from IITs, IIMs, NITs, and global leaders like Walmart, Amazon, Accenture, and ZS. Learn, grow, and lead in a team that values expertise and collaboration. Accelerate Your Growth: Access our Centres of Excellence to upskill and work on industry-leading innovations. Your professional development is a top priority. Work in a Culture of Excellence: Be part of a dynamic workplace that fosters creativity, teamwork, and a passion for building transformative solutions. Your contributions will be recognized and celebrated. About Our Leadership Anuj Gupta – Former Amazon leader with over 22 years of experience in building and managing large engineering teams. (B.Tech, IIT Delhi; MBA, ISB Hyderabad). Raghvendra Kushwah – Business consulting expert with 21+ years at Accenture and Cognizant (B.Tech, IIT Delhi; MBA, IIM Lucknow). Key Benefits Competitive salary and performance-based bonus. Comprehensive benefits package, including health insurance and flexible work hours. Opportunities for professional development and career growth. Location: Gurugram Submit your resume to saurabh.bhaumik@eucloid.com with the subject line “ Application: GenAI Engineer. ” Eucloid is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment. Show more Show less
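To ground the RAG and vector-database skills this GenAI role lists, here is a hedged sketch of the retrieval half of such a pipeline: embedding documents with a sentence-transformers model and searching them with FAISS. The model name and the toy corpus are assumptions for illustration, not a production setup from the posting.

```python
# Minimal sketch of RAG retrieval: embed documents with sentence-transformers and
# search them with FAISS. The model name and toy corpus are illustrative assumptions.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Q3 revenue grew 12% year over year.",
    "The churn model is retrained weekly on Databricks.",
    "Support tickets are classified with a fine-tuned BERT model.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")            # assumed embedding model
doc_vecs = model.encode(docs, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vecs.shape[1])               # inner product == cosine on unit vectors
index.add(np.asarray(doc_vecs, dtype="float32"))

query_vec = model.encode(
    ["How often is the churn model retrained?"], normalize_embeddings=True
)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 2)

for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")                       # top passages to feed the LLM prompt
```

The retrieved passages would then be stitched into a prompt for whichever LLM the pipeline uses; the indexing side scales the same way, just over a real corpus and a persistent vector store.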

Posted 1 day ago

Apply

0 years

0 Lacs

India

On-site

Linkedin logo

Company Description
ThreatXIntel is a startup cyber security company specializing in cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. We offer customized, affordable solutions tailored to meet the specific needs of businesses of all sizes. Our proactive approach to security involves continuous monitoring and testing to identify vulnerabilities before they can be exploited.

Role Description
We are looking for a skilled freelance Data Engineer with expertise in PySpark and AWS data services, particularly S3 and Redshift. Familiarity with Salesforce data integration is a plus. This role focuses on building scalable data pipelines and supporting analytics use cases in a cloud-native environment.

Key Responsibilities
Design and develop ETL/ELT data pipelines using PySpark for large-scale data processing
Ingest, transform, and store data across AWS S3 (data lake) and Amazon Redshift (data warehouse)
Integrate data from Salesforce into the cloud data ecosystem for analysis
Optimize data workflows for performance and cost-efficiency
Write efficient code and queries for structured and unstructured data
Collaborate with analysts and stakeholders to deliver clean, usable datasets

Required Skills
Strong hands-on experience with PySpark
Proficient in AWS services, especially S3 and Redshift
Basic working knowledge of Salesforce data structure or API
Ability to write complex SQL for data transformation and reporting
Familiarity with version control and Agile collaboration tools
Good communication and documentation skills
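As a hedged sketch of the S3-to-Redshift pipeline work this freelance role describes, the example below transforms Salesforce lead exports landed in S3 with PySpark and appends them to a Redshift staging table over JDBC. The endpoint, credentials, and table names are placeholders; a production job would typically use IAM-based authentication and the Redshift-optimised connector, and would pull credentials from a secret store.

```python
# Minimal sketch: transform S3 data with PySpark and append it to a Redshift table
# over JDBC. Endpoint, credentials, and names are placeholders; the Redshift JDBC
# driver jar is assumed to be on the Spark classpath.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-to-redshift").getOrCreate()

leads = (
    spark.read.json("s3a://example-bucket/raw/salesforce/leads/")
    .select("Id", "Email", "Status", "CreatedDate")
    .withColumn("created_date", F.to_date("CreatedDate"))
)

(
    leads.write.format("jdbc")
    .option("url", "jdbc:redshift://example-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev")
    .option("dbtable", "staging.salesforce_leads")
    .option("user", "etl_user")
    .option("password", "***")
    .option("driver", "com.amazon.redshift.jdbc42.Driver")
    .mode("append")
    .save()
)
```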

Posted 2 days ago

Apply