5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You'll Do
- Perform general application development activities, including unit testing, code deployment to the development environment, and technical documentation.
- Work on one or more projects, making contributions to unfamiliar code written by team members.
- Diagnose and resolve performance issues.
- Participate in the estimation process, use case specifications, reviews of test plans and test cases, requirements, and project planning.
- Document code and processes so that any other developer can dive in with minimal effort.
- Develop and operate high-scale applications from the backend to the UI layer, focusing on operational excellence, security, and scalability.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit engineering team employing agile software development practices.
- Triage product or system issues and debug, track, and resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
- Write, debug, and troubleshoot code in mainstream open source technologies.
- Lead the effort for Sprint deliverables and solve problems of medium complexity.
- Research, create, and develop software applications to extend and improve Equifax solutions.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in Sprint planning, Sprint retrospectives, and other team activities.

What Experience You Need
- Bachelor's degree or equivalent experience.
- 5+ years of software development experience using multiple versions of Python.
- Experience and familiarity with the Python frameworks currently in use to support software development processes.
- Develop, test, and deploy high-quality Python code for AI/ML applications, data pipelines, and backend services.
- Design, implement, and optimize machine learning models and algorithms for various business problems.
- Collaborate with data scientists to transition experimental models into production-ready systems.
- Build and maintain robust data ingestion and processing pipelines to feed data into ML models.
- Perform code reviews, provide constructive feedback, and ensure adherence to best coding practices.
- Troubleshoot, debug, and optimize existing ML systems and applications for performance and scalability.
- Stay up to date with the latest advancements in Python, machine learning, and related technologies.
- Document technical designs, processes, and operational procedures.
- Experience with cloud technology: GCP or AWS.

What Could Set You Apart
- A self-starter who identifies and responds to priority shifts with minimal supervision.
- Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others (a brief illustrative sketch follows this listing).
- Source code control management systems (e.g. Git, GitHub).
- Agile environments (e.g. Scrum, XP).
- Atlassian tooling (e.g. JIRA, Confluence, and GitHub).
- Developing with modern Python versions.
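For context on the Dataflow/Apache Beam work named above, here is a minimal, illustrative Beam pipeline in Python. It is a sketch only; the sample records and transform labels are hypothetical, not drawn from the posting.

```python
# Illustrative only: a minimal Apache Beam pipeline of the kind the role
# references (Dataflow/Apache Beam). Sample data and names are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run():
    # Locally this uses the DirectRunner; on GCP you would pass
    # --runner=DataflowRunner plus project/region options.
    with beam.Pipeline(options=PipelineOptions()) as p:
        (
            p
            | "Create" >> beam.Create([
                {"user": "a", "score": 3},
                {"user": "b", "score": 7},
                {"user": "b", "score": 9},
            ])
            | "KeepHighScores" >> beam.Filter(lambda r: r["score"] >= 5)
            | "ToKeyValue" >> beam.Map(lambda r: (r["user"], r["score"]))
            | "SumPerUser" >> beam.CombinePerKey(sum)
            | "Print" >> beam.Map(print)
        )

if __name__ == "__main__":
    run()
```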
Posted 15 hours ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description
We are seeking a highly skilled and experienced Big Data Architect cum Data Engineer to join our dynamic team. The ideal candidate should have a strong background in big data technologies and solution design, with hands-on expertise in PySpark, Databricks, and SQL. This position requires significant experience in building and managing data solutions in Databricks on Azure. The candidate should also have strong communication skills and experience in managing mid-size teams, leading client conversations, and presenting points of view and thought leadership.

Responsibilities
- Design and implement scalable big data architectures and solutions utilizing PySpark, SparkSQL, and Databricks on Azure or AWS.
- Build robust data models and maintain metadata-driven frameworks to optimize data processing and analytics.
- Build, test, and deploy sophisticated ETL pipelines using Azure Data Factory and other Azure-based tools.
- Ensure seamless data flow from various sources to destinations, including ADLS Gen 2.
- Implement data quality checks and validation frameworks (an illustrative sketch follows this listing).
- Establish and enforce data governance principles, ensuring data security and compliance with industry standards and regulations.
- Manage version control and deployment pipelines using Git and DevOps best practices.
- Provide accurate effort estimation and manage project timelines effectively.
- Collaborate with cross-functional teams to ensure aligned project goals and objectives.
- Leverage industry knowledge in banking, insurance, and pharma to design tailor-made data solutions.
- Stay updated with industry trends and innovations to proactively implement cutting-edge technologies and methodologies.
- Facilitate discussions between technical and non-technical stakeholders to drive project success.
- Document technical solutions and design specifications clearly and concisely.

Qualifications
- Bachelor's degree in computer science, engineering, or a related field; Master's degree preferred.
- 8+ years of experience in big data architecture and engineering.
- Extensive experience with PySpark, SparkSQL, and Databricks on Azure.
- Proficient in using Azure Data Lake Storage Gen 2, Azure Data Factory, Azure Event Hub, and Synapse.
- Strong experience in data modeling, metadata frameworks, and effort estimation.
- Experience with DevSecOps practices and proficiency in Git.
- Demonstrated experience in implementing data quality, data security, and data governance measures.
- Industry experience in banking, insurance, or pharma is a significant plus.
- Excellent communication skills, capable of articulating complex technical concepts to diverse audiences.
- Certification in Azure, Databricks, or related cloud technologies is a must.
- Familiarity with machine learning frameworks and data science methodologies is preferred.

Mandatory Skill Sets: Data Architect/Data Engineer/AWS
Preferred Skill Sets: Data Architect/Data Engineer/AWS
Years of Experience Required: 8-12 years
Education Qualification: B.E. (B.Tech)/M.E./M.Tech
Degrees/Field of Study Required: Bachelor Degree, Master Degree
Degrees/Field of Study Preferred: (not specified)
Certifications: (not specified)
Required Skills: Data Architecture
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 33 more}
Desired Languages: (not specified)
Travel Requirements: (not specified)
Available for Work Visa Sponsorship? (not specified)
Government Clearance Required? (not specified)
Job Posting End Date: (not specified)
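As a concrete illustration of the "data quality checks and validation frameworks" responsibility above, a minimal PySpark sketch follows. It assumes Databricks on Azure; the storage path and column names are hypothetical.

```python
# A minimal sketch of a data quality check of the kind described above,
# assuming PySpark on Databricks. Path and column names are hypothetical.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.format("delta").load(
    "abfss://raw@examplestorage.dfs.core.windows.net/orders"  # hypothetical path
)

# Evaluate all rules in a single aggregation pass.
report = df.agg(
    F.count(F.when(F.col("order_id").isNull(), 1)).alias("null_order_ids"),
    F.count(F.when(F.col("amount") < 0, 1)).alias("negative_amounts"),
).first()

# Fail the pipeline run if any rule is violated.
if report["null_order_ids"] or report["negative_amounts"]:
    raise ValueError(f"Data quality check failed: {report.asDict()}")
```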
Posted 16 hours ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description
We are seeking a highly skilled and experienced Big Data Architect cum Data Engineer to join our dynamic team. The ideal candidate should have a strong background in big data technologies and solution design, with hands-on expertise in PySpark, Databricks, and SQL. This position requires significant experience in building and managing data solutions in Databricks on Azure. The candidate should also have strong communication skills and experience in managing mid-size teams, leading client conversations, and presenting points of view and thought leadership.

Responsibilities
- Design and implement scalable big data architectures and solutions utilizing PySpark, SparkSQL, and Databricks on Azure or AWS.
- Build robust data models and maintain metadata-driven frameworks to optimize data processing and analytics (an illustrative sketch follows this listing).
- Build, test, and deploy sophisticated ETL pipelines using Azure Data Factory and other Azure-based tools.
- Ensure seamless data flow from various sources to destinations, including ADLS Gen 2.
- Implement data quality checks and validation frameworks.
- Establish and enforce data governance principles, ensuring data security and compliance with industry standards and regulations.
- Manage version control and deployment pipelines using Git and DevOps best practices.
- Provide accurate effort estimation and manage project timelines effectively.
- Collaborate with cross-functional teams to ensure aligned project goals and objectives.
- Leverage industry knowledge in banking, insurance, and pharma to design tailor-made data solutions.
- Stay updated with industry trends and innovations to proactively implement cutting-edge technologies and methodologies.
- Facilitate discussions between technical and non-technical stakeholders to drive project success.
- Document technical solutions and design specifications clearly and concisely.

Qualifications
- Bachelor's degree in computer science, engineering, or a related field; Master's degree preferred.
- 8+ years of experience in big data architecture and engineering.
- Extensive experience with PySpark, SparkSQL, and Databricks on Azure.
- Proficient in using Azure Data Lake Storage Gen 2, Azure Data Factory, Azure Event Hub, and Synapse.
- Strong experience in data modeling, metadata frameworks, and effort estimation.
- Experience with DevSecOps practices and proficiency in Git.
- Demonstrated experience in implementing data quality, data security, and data governance measures.
- Industry experience in banking, insurance, or pharma is a significant plus.
- Excellent communication skills, capable of articulating complex technical concepts to diverse audiences.
- Certification in Azure, Databricks, or related cloud technologies is a must.
- Familiarity with machine learning frameworks and data science methodologies is preferred.

Mandatory Skill Sets: Data Architect/Data Engineer/AWS
Preferred Skill Sets: Data Architect/Data Engineer/AWS
Years of Experience Required: 8-12 years
Education Qualification: B.E. (B.Tech)/M.E./M.Tech
Degrees/Field of Study Required: Master Degree, Bachelor Degree
Degrees/Field of Study Preferred: (not specified)
Certifications: (not specified)
Required Skills: Data Engineering
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 28 more}
Desired Languages: (not specified)
Travel Requirements: (not specified)
Available for Work Visa Sponsorship? (not specified)
Government Clearance Required? (not specified)
Job Posting End Date: (not specified)
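To illustrate the metadata-driven frameworks mentioned in the responsibilities above, here is a minimal, hedged sketch of a config-driven ingestion loop in PySpark. All paths, formats, and table names are hypothetical; in practice the config would live in a control table or JSON file.

```python
# A minimal sketch of a metadata-driven ingestion loop as described above.
# All paths, formats, and table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-driven-ingest").getOrCreate()

# In practice this config would be read from a control table or JSON file.
TABLES = [
    {"source": "abfss://landing@examplestorage.dfs.core.windows.net/customers",
     "format": "csv", "target": "bronze.customers"},
    {"source": "abfss://landing@examplestorage.dfs.core.windows.net/orders",
     "format": "json", "target": "bronze.orders"},
]

for cfg in TABLES:
    # One generic ingestion path, driven entirely by the metadata entry.
    (spark.read.format(cfg["format"])
         .option("header", "true")
         .load(cfg["source"])
         .write.format("delta")
         .mode("append")
         .saveAsTable(cfg["target"]))
```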
Posted 16 hours ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Senior Data Engineer – Databricks (Azure/AWS)

Role Overview:
We are looking for a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks using Azure or AWS platforms. The role will focus on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable and reliable data solutions.

Required Skills and Experience:
- 6+ years of hands-on data engineering experience, with 2+ years specifically working on Databricks in Azure or AWS.
- Proficiency in building and optimizing Spark pipelines (batch and streaming).
- Strong experience implementing bronze/silver/gold data models.
- Working knowledge of cloud storage systems (ADLS, S3) and compute services.
- Experience migrating data from RDBMS (Oracle, SQL Server) or Hadoop ecosystems.
- Familiarity with Airflow, Azure Data Factory, or AWS Glue for orchestration.
- Good scripting skills (Python, Scala, SQL) and version control (Git).

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with Delta Live Tables (DLT) and Databricks SQL.
- Understanding of cloud security best practices (IAM roles, encryption, ACLs).

Key Responsibilities:
- Design, develop, and operationalize scalable data pipelines on Databricks following medallion architecture principles.
- Migrate and transform large data volumes from traditional on-prem systems (Oracle, Hadoop, Exadata) into cloud data platforms.
- Develop efficient Spark (PySpark/Scala) jobs for ingestion, transformation, aggregation, and publishing of data.
- Implement data quality checks, error handling, retries, and data validation frameworks.
- Build automation scripts and CI/CD pipelines for Databricks workflows and deployment.
- Tune Spark jobs and optimize cost and performance in cloud environments.
- Collaborate with data architects, product owners, and analytics teams.

Attributes for Success:
- Strong analytical and problem-solving skills.
- Attention to scalability, resilience, and cost efficiency.
- Collaborative attitude and passion for clean, maintainable code.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and to perform crucial job functions in potential roles. That's why we're committed to creating a workforce where all individuals can do their best work. It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
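To make the bronze-silver-gold model this role references concrete, a minimal PySpark sketch of a bronze-to-silver promotion follows; table and column names are hypothetical, not taken from the posting.

```python
# A minimal sketch of a bronze-to-silver step in the medallion model the
# posting describes. Table and column names are hypothetical.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.table("bronze.events")

silver = (
    bronze
    .dropDuplicates(["event_id"])                     # de-duplicate the raw feed
    .filter(F.col("event_ts").isNotNull())            # basic validity rule
    .withColumn("event_date", F.to_date("event_ts"))  # conformed column
)

silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")
```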
Posted 16 hours ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Data Architect – Databricks (Azure/AWS)

Role Overview:
We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS. The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, and gold layers), supporting large-scale ingestion, transformation, and publishing of data.

Required Skills and Experience:
- 8+ years of experience in data architecture or engineering roles, with at least 3 years specializing in cloud-based big data solutions.
- Hands-on expertise with Databricks on Azure or AWS.
- Deep understanding of Delta Lake, the medallion architecture (bronze/silver/gold zones), and data governance tools (e.g., Unity Catalog, Purview).
- Strong experience migrating large datasets and batch/streaming pipelines from on-prem to Databricks.
- Expertise with Spark (PySpark/Scala) at scale and optimizing Spark jobs.
- Familiarity with ingestion from RDBMS (Oracle, SQL Server) and legacy Hadoop ecosystems.
- Proficiency in orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, AWS Glue Workflows).
- Strong understanding of cloud-native services for storage, compute, security, and networking.

Preferred Qualifications:
- Databricks Certified Data Engineer or Architect.
- Azure/AWS cloud certifications.
- Experience with real-time/streaming ingestion (Kafka, Event Hubs, Kinesis).
- Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Key Responsibilities:
- Define and design cloud-native data architecture on Databricks using Delta Lake, Unity Catalog, and related services.
- Develop migration strategies for moving on-premises data workloads (Oracle, Hadoop, Exadata, etc.) to Databricks on Azure/AWS.
- Architect and oversee data pipelines supporting ingestion, curation, transformation, and analytics in a multi-layered (bronze/silver/gold) model.
- Lead data modeling, schema design, performance optimization, and data governance best practices.
- Collaborate with data engineering, platform, and security teams to build production-ready solutions.
- Create standards for ingestion frameworks, job orchestration (e.g., Databricks Workflows, Airflow), and data quality validation.
- Support cost optimization, scalability design, and operational monitoring frameworks.
- Guide and mentor engineering teams during the build and migration phases.

Attributes for Success:
- Ability to lead architecture discussions with technical and business stakeholders.
- Passion for modern cloud data architectures and continuous learning.
- Pragmatic and solution-driven approach to migrations.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and to perform crucial job functions in potential roles. That's why we're committed to creating a workforce where all individuals can do their best work. It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
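One of the preferred qualifications above is real-time/streaming ingestion. As a hedged sketch, the following shows Kafka data ingested into a Delta bronze table with Spark Structured Streaming; the broker, topic, checkpoint path, and table names are hypothetical.

```python
# A minimal sketch of streaming ingestion (Kafka into a Delta bronze table).
# Broker, topic, checkpoint path, and table names are hypothetical.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

# Kafka delivers bytes; cast the payload and keep the broker timestamp.
parsed = raw.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("ingest_ts"),
)

(parsed.writeStream.format("delta")
       .option("checkpointLocation", "/tmp/checkpoints/events")
       .outputMode("append")
       .toTable("bronze.events"))
```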
Posted 16 hours ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Data Architect – Databricks (Azure/AWS)

Role Overview:
We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS. The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, and gold layers), supporting large-scale ingestion, transformation, and publishing of data.

Required Skills and Experience:
- 8+ years of experience in data architecture or engineering roles, with at least 3 years specializing in cloud-based big data solutions.
- Hands-on expertise with Databricks on Azure or AWS.
- Deep understanding of Delta Lake, the medallion architecture (bronze/silver/gold zones), and data governance tools (e.g., Unity Catalog, Purview).
- Strong experience migrating large datasets and batch/streaming pipelines from on-prem to Databricks.
- Expertise with Spark (PySpark/Scala) at scale and optimizing Spark jobs.
- Familiarity with ingestion from RDBMS (Oracle, SQL Server) and legacy Hadoop ecosystems.
- Proficiency in orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, AWS Glue Workflows).
- Strong understanding of cloud-native services for storage, compute, security, and networking.

Preferred Qualifications:
- Databricks Certified Data Engineer or Architect.
- Azure/AWS cloud certifications.
- Experience with real-time/streaming ingestion (Kafka, Event Hubs, Kinesis).
- Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Key Responsibilities:
- Define and design cloud-native data architecture on Databricks using Delta Lake, Unity Catalog, and related services.
- Develop migration strategies for moving on-premises data workloads (Oracle, Hadoop, Exadata, etc.) to Databricks on Azure/AWS.
- Architect and oversee data pipelines supporting ingestion, curation, transformation, and analytics in a multi-layered (bronze/silver/gold) model.
- Lead data modeling, schema design, performance optimization, and data governance best practices.
- Collaborate with data engineering, platform, and security teams to build production-ready solutions.
- Create standards for ingestion frameworks, job orchestration (e.g., Databricks Workflows, Airflow), and data quality validation.
- Support cost optimization, scalability design, and operational monitoring frameworks.
- Guide and mentor engineering teams during the build and migration phases.

Attributes for Success:
- Ability to lead architecture discussions with technical and business stakeholders.
- Passion for modern cloud data architectures and continuous learning.
- Pragmatic and solution-driven approach to migrations.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and to perform crucial job functions in potential roles. That's why we're committed to creating a workforce where all individuals can do their best work. It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
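The migration strategies described in this role typically begin with bulk extracts from the source RDBMS. The following is a minimal, illustrative sketch of a parallel JDBC read from Oracle landed into a Delta bronze table; all connection details, credentials, and partition bounds are hypothetical.

```python
# A minimal sketch of the on-prem-to-cloud migration pattern described above:
# a parallel JDBC read from Oracle into Delta. Connection details are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle-migration").getOrCreate()

df = (spark.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//onprem-host:1521/ORCLPDB")
      .option("dbtable", "SALES.ORDERS")
      .option("user", "etl_user")
      .option("password", "<from-secret-scope>")  # never hard-code secrets
      .option("fetchsize", "10000")
      .option("numPartitions", "8")          # parallel partitioned read
      .option("partitionColumn", "ORDER_ID")
      .option("lowerBound", "1")
      .option("upperBound", "100000000")
      .load())

df.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")
```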
Posted 17 hours ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Senior Data Engineer – Databricks (Azure/AWS)

Role Overview:
We are looking for a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks using Azure or AWS platforms. The role will focus on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable and reliable data solutions.

Required Skills and Experience:
- 6+ years of hands-on data engineering experience, with 2+ years specifically working on Databricks in Azure or AWS.
- Proficiency in building and optimizing Spark pipelines (batch and streaming).
- Strong experience implementing bronze/silver/gold data models.
- Working knowledge of cloud storage systems (ADLS, S3) and compute services.
- Experience migrating data from RDBMS (Oracle, SQL Server) or Hadoop ecosystems.
- Familiarity with Airflow, Azure Data Factory, or AWS Glue for orchestration.
- Good scripting skills (Python, Scala, SQL) and version control (Git).

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with Delta Live Tables (DLT) and Databricks SQL.
- Understanding of cloud security best practices (IAM roles, encryption, ACLs).

Key Responsibilities:
- Design, develop, and operationalize scalable data pipelines on Databricks following medallion architecture principles.
- Migrate and transform large data volumes from traditional on-prem systems (Oracle, Hadoop, Exadata) into cloud data platforms.
- Develop efficient Spark (PySpark/Scala) jobs for ingestion, transformation, aggregation, and publishing of data.
- Implement data quality checks, error handling, retries, and data validation frameworks.
- Build automation scripts and CI/CD pipelines for Databricks workflows and deployment.
- Tune Spark jobs and optimize cost and performance in cloud environments.
- Collaborate with data architects, product owners, and analytics teams.

Attributes for Success:
- Strong analytical and problem-solving skills.
- Attention to scalability, resilience, and cost efficiency.
- Collaborative attitude and passion for clean, maintainable code.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and to perform crucial job functions in potential roles. That's why we're committed to creating a workforce where all individuals can do their best work. It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
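Delta Live Tables appears above as a preferred qualification. The following is a minimal DLT sketch; note it runs only inside a Databricks DLT pipeline (where `spark` and the `dlt` module are provided), and all table names and paths are hypothetical.

```python
# A minimal Delta Live Tables sketch (a preferred qualification above).
# Runs only inside a Databricks DLT pipeline; names and paths are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders landed from cloud storage")
def bronze_orders():
    return spark.read.format("json").load("/mnt/landing/orders")

@dlt.table(comment="Validated orders with load metadata")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # declarative DQ rule
def silver_orders():
    return dlt.read("bronze_orders").withColumn("load_date", F.current_date())
```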
Posted 17 hours ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description
Senior Data Engineer – Databricks (Azure/AWS)

Role Overview:
We are looking for a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks using Azure or AWS platforms. The role will focus on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable and reliable data solutions.

Required Skills and Experience:
- 6+ years of hands-on data engineering experience, with 2+ years specifically working on Databricks in Azure or AWS.
- Proficiency in building and optimizing Spark pipelines (batch and streaming).
- Strong experience implementing bronze/silver/gold data models.
- Working knowledge of cloud storage systems (ADLS, S3) and compute services.
- Experience migrating data from RDBMS (Oracle, SQL Server) or Hadoop ecosystems.
- Familiarity with Airflow, Azure Data Factory, or AWS Glue for orchestration.
- Good scripting skills (Python, Scala, SQL) and version control (Git).

Preferred Qualifications:
- Databricks Certified Data Engineer Associate or Professional.
- Experience with Delta Live Tables (DLT) and Databricks SQL.
- Understanding of cloud security best practices (IAM roles, encryption, ACLs).

Key Responsibilities:
- Design, develop, and operationalize scalable data pipelines on Databricks following medallion architecture principles.
- Migrate and transform large data volumes from traditional on-prem systems (Oracle, Hadoop, Exadata) into cloud data platforms.
- Develop efficient Spark (PySpark/Scala) jobs for ingestion, transformation, aggregation, and publishing of data.
- Implement data quality checks, error handling, retries, and data validation frameworks.
- Build automation scripts and CI/CD pipelines for Databricks workflows and deployment.
- Tune Spark jobs and optimize cost and performance in cloud environments.
- Collaborate with data architects, product owners, and analytics teams.

Attributes for Success:
- Strong analytical and problem-solving skills.
- Attention to scalability, resilience, and cost efficiency.
- Collaborative attitude and passion for clean, maintainable code.

Diversity and Inclusion:
An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and to perform crucial job functions in potential roles. That's why we're committed to creating a workforce where all individuals can do their best work. It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
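Rounding out the medallion flow this role describes, here is a minimal sketch of a silver-to-gold aggregation; table and column names are hypothetical.

```python
# A minimal sketch of a silver-to-gold aggregation in the lakehouse model
# the posting describes. Table and column names are hypothetical.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("silver-to-gold").getOrCreate()

gold = (spark.read.table("silver.orders")
        .groupBy("customer_id", "order_date")
        .agg(F.sum("amount").alias("daily_spend"),
             F.count(F.lit(1)).alias("order_count")))

gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_customer_spend")
```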
Posted 17 hours ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description Data Architect – Databricks (Azure/AWS) Role Overview: We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS . The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, gold layers), supporting large-scale ingestion, transformation, and publishing of data. Required Skills and Experience: 8+ years of experience in data architecture or engineering roles, with at least 3+ years specializing in cloud-based big data solutions. Hands-on expertise with Databricks on Azure or AWS. Deep understanding of Delta Lake, medallion architecture (bronze/silver/gold zones), and data governance tools (e.g., Unity Catalog, Purview). Strong experience migrating large datasets and batch/streaming pipelines from on-prem to Databricks. Expertise with Spark (PySpark/Scala) at scale and optimizing Spark jobs. Familiarity with ingestion from RDBMS (Oracle, SQL Server) and legacy Hadoop ecosystems. Proficiency in orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, AWS Glue Workflows). Strong understanding of cloud-native services for storage, compute, security, and networking. Preferred Qualifications: Databricks Certified Data Engineer or Architect. Azure/AWS cloud certifications. Experience with real-time/streaming ingestion (Kafka, Event Hubs, Kinesis). Familiarity with data quality frameworks (e.g., Deequ, Great Expectations). Responsibilities Key Responsibilities: Define and design cloud-native data architecture on Databricks using Delta Lake, Unity Catalog, and related services. Develop migration strategies for moving on-premises data workloads (Oracle, Hadoop, Exadata, etc.) to Databricks on Azure/AWS. Architect and oversee data pipelines supporting ingestion, curation, transformation, and analytics in a multi-layered (bronze/silver/gold) model. Lead data modeling, schema design, performance optimization, and data governance best practices. Collaborate with data engineering, platform, and security teams to build production-ready solutions. Create standards for ingestion frameworks, job orchestration (e.g., workflows, Airflow), and data quality validation. Support cost optimization, scalability design, and operational monitoring frameworks. Guide and mentor engineering teams during the build and migration phases. Attributes for Success: Ability to lead architecture discussions with technical and business stakeholders. Passion for modern cloud data architectures and continuous learning. Pragmatic and solution-driven approach to migrations. Diversity and Inclusion : An Oracle career can span industries, roles, Countries, and cultures, allowing you to flourish in new roles and innovate while blending work life in. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability. 
The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and in potential roles, so they can perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
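For context on the medallion pattern this Data Architect role centers on, the following is a minimal PySpark sketch of a bronze-to-silver promotion on Delta Lake. The paths, column names, and cleansing rules are illustrative assumptions, not code from the employer.

```python
# Minimal bronze -> silver promotion on Delta Lake (illustrative only).
# Paths, schema, and cleansing rules below are assumptions for the sketch.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Bronze: raw, append-only landing zone written by the ingestion jobs.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/orders")

# Silver: cleansed and conformed. Drop rows missing the business key,
# standardize types, and keep one record per key.
silver = (
    bronze
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .dropDuplicates(["order_id"])
)

(silver.write
    .format("delta")
    .mode("overwrite")
    .save("/mnt/lake/silver/orders"))
```

In a real medallion design the silver job would typically be incremental (MERGE or streaming) rather than a full overwrite; overwrite keeps the sketch short.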
Posted 17 hours ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
Data Architect – Databricks (Azure/AWS)

Role Overview: We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS. The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, and gold layers), supporting large-scale ingestion, transformation, and publishing of data.

Required Skills and Experience: 8+ years of experience in data architecture or engineering roles, with at least 3 years specializing in cloud-based big data solutions. Hands-on expertise with Databricks on Azure or AWS. Deep understanding of Delta Lake, the medallion architecture (bronze/silver/gold zones), and data governance tools (e.g., Unity Catalog, Purview). Strong experience migrating large datasets and batch/streaming pipelines from on-prem to Databricks. Expertise with Spark (PySpark/Scala) at scale and with optimizing Spark jobs. Familiarity with ingestion from RDBMS sources (Oracle, SQL Server) and legacy Hadoop ecosystems. Proficiency in orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, AWS Glue Workflows). Strong understanding of cloud-native services for storage, compute, security, and networking.

Preferred Qualifications: Databricks Certified Data Engineer or Architect. Azure/AWS cloud certifications. Experience with real-time/streaming ingestion (Kafka, Event Hubs, Kinesis). Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Key Responsibilities: Define and design cloud-native data architecture on Databricks using Delta Lake, Unity Catalog, and related services. Develop migration strategies for moving on-premises data workloads (Oracle, Hadoop, Exadata, etc.) to Databricks on Azure/AWS. Architect and oversee data pipelines supporting ingestion, curation, transformation, and analytics in a multi-layered (bronze/silver/gold) model. Lead data modeling, schema design, performance optimization, and data governance best practices. Collaborate with data engineering, platform, and security teams to build production-ready solutions. Create standards for ingestion frameworks, job orchestration (e.g., Databricks Workflows, Airflow), and data quality validation. Support cost optimization, scalability design, and operational monitoring frameworks. Guide and mentor engineering teams during the build and migration phases.

Attributes for Success: Ability to lead architecture discussions with technical and business stakeholders. Passion for modern cloud data architectures and continuous learning. A pragmatic, solution-driven approach to migrations.

Diversity and Inclusion: An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability.
The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and in potential roles, so they can perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
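The streaming-ingestion experience this posting prefers (Kafka, Event Hubs, Kinesis) typically lands raw events in the bronze layer first. A hedged Structured Streaming sketch follows; the broker, topic, and paths are placeholder assumptions.

```python
# Illustrative Structured Streaming ingestion from Kafka into a bronze Delta
# table. Broker address, topic, and storage paths are invented placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka_to_bronze").getOrCreate()

raw = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .option("startingOffsets", "latest")
    .load())

# Keep the payload and ingestion metadata; parsing and schema enforcement
# are deferred to the silver layer in a medallion design.
bronze = raw.selectExpr("CAST(value AS STRING) AS payload",
                        "timestamp AS ingest_ts")

(bronze.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/lake/_checkpoints/orders_bronze")
    .outputMode("append")
    .start("/mnt/lake/bronze/orders"))
```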
Posted 17 hours ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
Senior Data Engineer – Databricks (Azure/AWS)

Role Overview: We are looking for a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks using Azure or AWS platforms. The role will focus on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable and reliable data solutions.

Required Skills and Experience: 6+ years of hands-on data engineering experience, with 2+ years specifically working on Databricks in Azure or AWS. Proficiency in building and optimizing Spark pipelines (batch and streaming). Strong experience implementing bronze/silver/gold data models. Working knowledge of cloud storage systems (ADLS, S3) and compute services. Experience migrating data from RDBMS (Oracle, SQL Server) or Hadoop ecosystems. Familiarity with Airflow, Azure Data Factory, or AWS Glue for orchestration. Good scripting skills (Python, Scala, SQL) and version control (Git).

Preferred Qualifications: Databricks Certified Data Engineer Associate or Professional. Experience with Delta Live Tables (DLT) and Databricks SQL. Understanding of cloud security best practices (IAM roles, encryption, ACLs).

Key Responsibilities: Design, develop, and operationalize scalable data pipelines on Databricks following medallion architecture principles. Migrate and transform large data volumes from traditional on-prem systems (Oracle, Hadoop, Exadata) into cloud data platforms. Develop efficient Spark (PySpark/Scala) jobs for ingestion, transformation, aggregation, and publishing of data. Implement data quality checks, error handling, retries, and data validation frameworks. Build automation scripts and CI/CD pipelines for Databricks workflows and deployments. Tune Spark jobs and optimize cost and performance in cloud environments. Collaborate with data architects, product owners, and analytics teams.

Attributes for Success: Strong analytical and problem-solving skills. Attention to scalability, resilience, and cost efficiency. A collaborative attitude and a passion for clean, maintainable code.

Diversity and Inclusion: An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and in potential roles, so they can perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work.
It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
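The Senior Data Engineer posting above asks for data quality checks, error handling, and retries. Here is a minimal sketch of both patterns in Python/PySpark; the retry policy, threshold, and the particular check are illustrative assumptions, not a prescribed framework.

```python
# Hedged sketch of a validation-with-retry pattern for a pipeline step.
# The backoff policy and the null-ratio threshold are invented examples.
import time
from pyspark.sql import DataFrame, functions as F

def run_with_retries(step, max_attempts=3, backoff_seconds=30):
    """Retry a flaky pipeline step with linear backoff before failing."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(backoff_seconds * attempt)

def check_not_null_ratio(df: DataFrame, column: str, min_ratio: float = 0.99):
    """Fail fast if too many rows are missing the business key."""
    total = df.count()
    non_null = df.filter(F.col(column).isNotNull()).count()
    ratio = non_null / total if total else 0.0
    if ratio < min_ratio:
        raise ValueError(f"{column}: non-null ratio {ratio:.3f} below {min_ratio}")
```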
Posted 17 hours ago
8.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description
Data Architect – Databricks (Azure/AWS)

Role Overview: We are seeking an experienced Data Architect specializing in Databricks to lead the architecture, design, and migration of enterprise data workloads from on-premises systems (e.g., Oracle, Exadata, Hadoop) to Databricks on Azure or AWS. The role involves designing scalable, secure, and high-performing data platforms based on the medallion architecture (bronze, silver, and gold layers), supporting large-scale ingestion, transformation, and publishing of data.

Required Skills and Experience: 8+ years of experience in data architecture or engineering roles, with at least 3 years specializing in cloud-based big data solutions. Hands-on expertise with Databricks on Azure or AWS. Deep understanding of Delta Lake, the medallion architecture (bronze/silver/gold zones), and data governance tools (e.g., Unity Catalog, Purview). Strong experience migrating large datasets and batch/streaming pipelines from on-prem to Databricks. Expertise with Spark (PySpark/Scala) at scale and with optimizing Spark jobs. Familiarity with ingestion from RDBMS sources (Oracle, SQL Server) and legacy Hadoop ecosystems. Proficiency in orchestration tools (Databricks Workflows, Airflow, Azure Data Factory, AWS Glue Workflows). Strong understanding of cloud-native services for storage, compute, security, and networking.

Preferred Qualifications: Databricks Certified Data Engineer or Architect. Azure/AWS cloud certifications. Experience with real-time/streaming ingestion (Kafka, Event Hubs, Kinesis). Familiarity with data quality frameworks (e.g., Deequ, Great Expectations).

Key Responsibilities: Define and design cloud-native data architecture on Databricks using Delta Lake, Unity Catalog, and related services. Develop migration strategies for moving on-premises data workloads (Oracle, Hadoop, Exadata, etc.) to Databricks on Azure/AWS. Architect and oversee data pipelines supporting ingestion, curation, transformation, and analytics in a multi-layered (bronze/silver/gold) model. Lead data modeling, schema design, performance optimization, and data governance best practices. Collaborate with data engineering, platform, and security teams to build production-ready solutions. Create standards for ingestion frameworks, job orchestration (e.g., Databricks Workflows, Airflow), and data quality validation. Support cost optimization, scalability design, and operational monitoring frameworks. Guide and mentor engineering teams during the build and migration phases.

Attributes for Success: Ability to lead architecture discussions with technical and business stakeholders. Passion for modern cloud data architectures and continuous learning. A pragmatic, solution-driven approach to migrations.

Diversity and Inclusion: An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability.
The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and in potential roles, so they can perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
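Migrating RDBMS workloads of the kind this posting describes usually starts with a parallel JDBC extract into the bronze zone. A hedged sketch follows, assuming an Oracle JDBC driver is available on the cluster classpath; the connection details, table name, and partition bounds are placeholders.

```python
# Illustrative parallel JDBC pull from an on-prem Oracle table into bronze.
# URL, credentials, table, and bounds are placeholders for the sketch;
# the Oracle JDBC driver is assumed to be installed on the cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oracle_to_bronze").getOrCreate()

orders = (spark.read
    .format("jdbc")
    .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")  # placeholder
    .option("dbtable", "SALES.ORDERS")                          # placeholder
    .option("user", "etl_user")
    .option("password", "***")
    .option("fetchsize", 10000)          # larger fetches for bulk extraction
    .option("numPartitions", 8)          # parallel reads across the key range
    .option("partitionColumn", "ORDER_ID")
    .option("lowerBound", 1)
    .option("upperBound", 100000000)
    .load())

orders.write.format("delta").mode("overwrite").save("/mnt/lake/bronze/orders")
```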
Posted 17 hours ago
6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description
Senior Data Engineer – Databricks (Azure/AWS)

Role Overview: We are looking for a hands-on Senior Data Engineer experienced in migrating and building large-scale data pipelines on Databricks using Azure or AWS platforms. The role will focus on implementing batch and streaming pipelines, applying the bronze-silver-gold data lakehouse model, and ensuring scalable and reliable data solutions.

Required Skills and Experience: 6+ years of hands-on data engineering experience, with 2+ years specifically working on Databricks in Azure or AWS. Proficiency in building and optimizing Spark pipelines (batch and streaming). Strong experience implementing bronze/silver/gold data models. Working knowledge of cloud storage systems (ADLS, S3) and compute services. Experience migrating data from RDBMS (Oracle, SQL Server) or Hadoop ecosystems. Familiarity with Airflow, Azure Data Factory, or AWS Glue for orchestration. Good scripting skills (Python, Scala, SQL) and version control (Git).

Preferred Qualifications: Databricks Certified Data Engineer Associate or Professional. Experience with Delta Live Tables (DLT) and Databricks SQL. Understanding of cloud security best practices (IAM roles, encryption, ACLs).

Key Responsibilities: Design, develop, and operationalize scalable data pipelines on Databricks following medallion architecture principles. Migrate and transform large data volumes from traditional on-prem systems (Oracle, Hadoop, Exadata) into cloud data platforms. Develop efficient Spark (PySpark/Scala) jobs for ingestion, transformation, aggregation, and publishing of data. Implement data quality checks, error handling, retries, and data validation frameworks. Build automation scripts and CI/CD pipelines for Databricks workflows and deployments. Tune Spark jobs and optimize cost and performance in cloud environments. Collaborate with data architects, product owners, and analytics teams.

Attributes for Success: Strong analytical and problem-solving skills. Attention to scalability, resilience, and cost efficiency. A collaborative attitude and a passion for clean, maintainable code.

Diversity and Inclusion: An Oracle career can span industries, roles, countries, and cultures, allowing you to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and in potential roles, so they can perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work.
It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
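On the Spark tuning responsibility named in the posting above, a silver-to-gold aggregation with two common performance levers looks roughly like the sketch below; the table paths and the revenue metric are invented for illustration.

```python
# Hedged sketch of a silver -> gold aggregation with common tuning levers.
# Paths, columns, and the metric are assumptions for the example.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("silver_to_gold").getOrCreate()

# Adaptive query execution lets Spark rebalance skewed partitions at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")

silver = spark.read.format("delta").load("/mnt/lake/silver/orders")

daily_revenue = (
    silver
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

(daily_revenue
    .coalesce(1)  # small summary table: avoid writing many tiny files
    .write.format("delta").mode("overwrite")
    .save("/mnt/lake/gold/daily_revenue"))
```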
Posted 17 hours ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
5+ years of hands-on experience writing automated Python scripts, following a best-practices approach, for creating Airflow DAGs. Strong practical Python knowledge of key modules such as Pandas/NumPy, along with OOP concepts. Ability to analyze and diagnose issues and recommend fixes following Python best practices. Basic knowledge of Git/GitLab to manage Airflow platform code and work on enhancements as needed. Basic knowledge of AWS data engineering services: S3, IAM, Glue, Lambda, and Step Functions. Knowledge of Airflow and its internal components is good to have. SAP knowledge (SD/MM/FICO) or experience in any ERP module will be given added weightage.

A day in the life of an Infoscion: As part of the Infosys consulting team, your primary role would be to get to the heart of customer issues, diagnose problem areas, design innovative solutions, and facilitate deployment, resulting in client delight. You will develop proposals by owning parts of the proposal document and by giving inputs to solution design in your areas of expertise. You will plan configuration activities, configure the product as per the design, conduct conference room pilots, and assist in resolving queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates that suit the customer's budgetary requirements and are in line with the organization's financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives, with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data. Awareness of the latest technologies and trends. Logical thinking and problem-solving skills, along with an ability to collaborate. Ability to assess current processes, identify improvement areas, and suggest technology solutions. Knowledge of one or two industry domains.

Location of posting: Infosys Ltd. is committed to ensuring you have the best experience throughout your journey with us. We currently have open positions in a number of locations across India - Bangalore, Pune, Hyderabad, Mysore, Kolkata, Chennai, Chandigarh, Trivandrum, Indore, Nagpur, Mangalore, Noida, Bhubaneswar, Coimbatore, Jaipur, Hubli, and Vizag. While we work in accordance with business requirements, we shall strive to offer you the location of your choice, where possible.
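As a point of reference for the Airflow requirement in this posting, a minimal DAG in the described style might look like the sketch below (Airflow 2.4+ for the schedule argument). The task logic, file paths, and schedule are placeholder assumptions, not Infosys code.

```python
# A minimal Airflow DAG with a single Pandas-based cleaning task.
# Paths and schedule are invented placeholders for the sketch.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_clean():
    """Toy task: load a CSV with Pandas and drop incomplete rows."""
    df = pd.read_csv("/tmp/input.csv")            # placeholder source
    df.dropna().to_parquet("/tmp/clean.parquet")  # placeholder sink

with DAG(
    dag_id="example_cleaning_dag",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    clean = PythonOperator(
        task_id="extract_and_clean",
        python_callable=extract_and_clean,
    )
```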
Posted 18 hours ago
5.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
JOB DESCRIPTION - DATA SCIENTIST
Role: Data Scientist
Experience: 5 to 10 Years
Work Mode: Remote
Immediate Joiners Preferred

About the Role: We are building an AI-powered workforce intelligence platform that helps businesses optimize talent strategies, enhance decision-making, and drive operational efficiency. Our software leverages cutting-edge AI, NLP, and data science to extract meaningful insights from vast amounts of structured and unstructured workforce data. As part of our new AI team, you will have the opportunity to work on real-world AI applications, contribute to innovative NLP solutions, and gain experience in building AI-driven products from the ground up.

Required Skills & Qualifications:
• Strong experience in Python programming.
• 5-10 years of experience in Data Science/NLP (freshers with strong NLP projects are welcome).
• Proficiency in Python, PyTorch, Scikit-learn, and NLP libraries (NLTK, SpaCy, Hugging Face).
• Basic knowledge of cloud platforms (AWS, GCP, or Azure).
• Retrieval, machine learning, artificial intelligence, generative AI, semantic search, reranking, and evaluating search performance.
• Experience with SQL for data manipulation and analysis.
• Assist in designing, training, and optimizing ML/NLP models using PyTorch, NLTK, Scikit-learn, and Transformer models (BERT, GPT, etc.).
• Familiarity with MLOps tools like Airflow, MLflow, or similar.
• Experience with big data processing (Spark, Pandas, or Dask).
• Help deploy AI/ML solutions on AWS, GCP, or Azure.
• Collaborate with engineers to integrate AI models into production systems.
• Expertise in using SQL and Python to clean, preprocess, and analyze large datasets.
• Learn and innovate: stay updated with the latest advancements in NLP, AI, and ML frameworks.
• Strong analytical and problem-solving skills.
• Willingness to learn, experiment, and take ownership in a fast-paced startup environment.

Nice to Have:
• Desire to grow within the company.
• Team player and quick learner.
• Performance-driven.
• Strong networking and outreach skills.
• An exploring aptitude and killer attitude.
• Ability to communicate and collaborate with the team with ease.
• Drive to get results and not let anything get in your way.
• Critical and analytical thinking skills, with keen attention to detail.
• Demonstrate ownership and strive for excellence in everything you do.
• A high level of curiosity; keep abreast of the latest technologies and tools.
• Ability to pick up new software easily, represent yourself among peers, and coordinate during meetings with customers.

What We Offer:
• A market-leading salary along with a comprehensive benefits package to support your well-being.
• A hybrid or remote work setup that prioritizes work-life balance and personal wellbeing.
• Investment in your career through continuous learning and internal growth opportunities.
• A dynamic, inclusive, and vibrant workplace where your contributions are recognized and rewarded.
• Straightforward policies, open communication, and a supportive work environment where everyone thrives.

About the Company:
https://predigle.com/
https://www.espergroup.com/
Predigle, an EsperGroup company, focuses on building disruptive technology platforms to transform daily business operations. Predigle has expanded rapidly to offer various products and services.
Predigle Intelligence (Pi) is a comprehensive portable AI platform that offers a low-code/no-code AI design solution for solving business problems.
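For a concrete sense of the Scikit-learn/NLP stack this posting lists, here is a hedged baseline text classifier; the toy texts and labels are invented for the example, and a production system would use far more data and likely a Transformer model.

```python
# Hedged sketch of a baseline NLP classifier with Scikit-learn.
# The tiny dataset and label scheme below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = ["great leadership skills", "strong python background",
         "excellent communicator", "built data pipelines in spark"]
labels = ["soft_skill", "hard_skill", "soft_skill", "hard_skill"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word and bigram features
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)

print(clf.predict(["deep learning with pytorch"]))  # likely 'hard_skill'
```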
Posted 18 hours ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, or any status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description: We are seeking a highly skilled and experienced Big Data Architect cum Data Engineer to join our dynamic team. The ideal candidate should have a strong background in big data technologies, experience designing solutions, and hands-on expertise in PySpark, Databricks, and SQL. This position requires significant experience in building and managing data solutions in Databricks on Azure. The candidate should also have strong communication skills, along with experience managing mid-size teams, leading client conversations, and presenting points of view and thought leadership.

Responsibilities:
· Design and implement scalable big data architectures and solutions utilizing PySpark, SparkSQL, and Databricks on Azure or AWS.
· Build robust data models and maintain metadata-driven frameworks to optimize data processing and analytics.
· Build, test, and deploy sophisticated ETL pipelines using Azure Data Factory and other Azure-based tools.
· Ensure seamless data flow from various sources to destinations including ADLS Gen 2.
· Implement data quality checks and validation frameworks.
· Establish and enforce data governance principles, ensuring data security and compliance with industry standards and regulations.
· Manage version control and deployment pipelines using Git and DevOps best practices.
· Provide accurate effort estimation and manage project timelines effectively.
· Collaborate with cross-functional teams to ensure aligned project goals and objectives.
· Leverage industry knowledge in banking, insurance, and pharma to design tailor-made data solutions.
· Stay updated on industry trends and innovations to proactively implement cutting-edge technologies and methodologies.
· Facilitate discussions between technical and non-technical stakeholders to drive project success.
· Document technical solutions and design specifications clearly and concisely.

Qualifications:
· Bachelor's degree in Computer Science, Engineering, or a related field; Master’s degree preferred.
· 8+ years of experience in big data architecture and engineering.
· Extensive experience with PySpark, SparkSQL, and Databricks on Azure.
· Proficiency with Azure Data Lake Storage Gen 2, Azure Data Factory, Azure Event Hub, and Synapse.
· Strong experience in data modeling, metadata frameworks, and effort estimation.
· Experience with DevSecOps practices and proficiency in Git.
· Demonstrated experience implementing data quality, data security, and data governance measures.
· Industry experience in banking, insurance, or pharma is a significant plus.
· Excellent communication skills, capable of articulating complex technical concepts to diverse audiences.
· Certification in Azure, Databricks, or related cloud technologies is a must.
· Familiarity with machine learning frameworks and data science methodologies is preferred.

Mandatory skill sets: Data Architect/Data Engineer/AWS
Preferred skill sets: Data Architect/Data Engineer/AWS
Years of experience required: 8-12 years
Education qualification: B.E. (B.Tech)/M.E./M.Tech
Degrees/Field of Study required: Bachelor Degree, Master Degree
Required Skills: Data Architecture
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 33 more}
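The metadata-driven framework mentioned in this posting's responsibilities is commonly a config-driven ingestion loop. A minimal PySpark sketch follows, assuming the registry would normally live in a control table or config store rather than inline; the sources and targets are invented examples.

```python
# Illustrative skeleton of a metadata-driven ingestion pass.
# The table registry, paths, and format choices are invented placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata_driven_ingest").getOrCreate()

# In practice this registry would live in a control table or config store.
TABLES = [
    {"source": "/mnt/raw/crm/customers.csv", "target": "/mnt/lake/bronze/customers"},
    {"source": "/mnt/raw/erp/invoices.csv",  "target": "/mnt/lake/bronze/invoices"},
]

for entry in TABLES:
    df = spark.read.option("header", "true").csv(entry["source"])
    (df.write.format("delta")
        .mode("overwrite")
        .save(entry["target"]))
```

The appeal of the pattern is that onboarding a new source becomes a registry change rather than a code change.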
Posted 19 hours ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, or any status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description: We are seeking a highly skilled and experienced Big Data Architect cum Data Engineer to join our dynamic team. The ideal candidate should have a strong background in big data technologies, experience designing solutions, and hands-on expertise in PySpark, Databricks, and SQL. This position requires significant experience in building and managing data solutions in Databricks on Azure. The candidate should also have strong communication skills, along with experience managing mid-size teams, leading client conversations, and presenting points of view and thought leadership.

Responsibilities:
· Design and implement scalable big data architectures and solutions utilizing PySpark, SparkSQL, and Databricks on Azure or AWS.
· Build robust data models and maintain metadata-driven frameworks to optimize data processing and analytics.
· Build, test, and deploy sophisticated ETL pipelines using Azure Data Factory and other Azure-based tools.
· Ensure seamless data flow from various sources to destinations including ADLS Gen 2.
· Implement data quality checks and validation frameworks.
· Establish and enforce data governance principles, ensuring data security and compliance with industry standards and regulations.
· Manage version control and deployment pipelines using Git and DevOps best practices.
· Provide accurate effort estimation and manage project timelines effectively.
· Collaborate with cross-functional teams to ensure aligned project goals and objectives.
· Leverage industry knowledge in banking, insurance, and pharma to design tailor-made data solutions.
· Stay updated on industry trends and innovations to proactively implement cutting-edge technologies and methodologies.
· Facilitate discussions between technical and non-technical stakeholders to drive project success.
· Document technical solutions and design specifications clearly and concisely.

Qualifications:
· Bachelor's degree in Computer Science, Engineering, or a related field; Master’s degree preferred.
· 8+ years of experience in big data architecture and engineering.
· Extensive experience with PySpark, SparkSQL, and Databricks on Azure.
· Proficiency with Azure Data Lake Storage Gen 2, Azure Data Factory, Azure Event Hub, and Synapse.
· Strong experience in data modeling, metadata frameworks, and effort estimation.
· Experience with DevSecOps practices and proficiency in Git.
· Demonstrated experience implementing data quality, data security, and data governance measures.
· Industry experience in banking, insurance, or pharma is a significant plus.
· Excellent communication skills, capable of articulating complex technical concepts to diverse audiences.
· Certification in Azure, Databricks, or related cloud technologies is a must.
· Familiarity with machine learning frameworks and data science methodologies is preferred.

Mandatory skill sets: Data Architect/Data Engineer/AWS
Preferred skill sets: Data Architect/Data Engineer/AWS
Years of experience required: 8-12 years
Education qualification: B.E. (B.Tech)/M.E./M.Tech
Degrees/Field of Study required: Master Degree, Bachelor Degree
Required Skills: Data Engineering
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 28 more}
Posted 19 hours ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes, and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences, or any status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description: We are seeking a highly skilled and experienced Big Data Architect cum Data Engineer to join our dynamic team. The ideal candidate should have a strong background in big data technologies, experience designing solutions, and hands-on expertise in PySpark, Databricks, and SQL. This position requires significant experience in building and managing data solutions in Databricks on Azure. The candidate should also have strong communication skills, along with experience managing mid-size teams, leading client conversations, and presenting points of view and thought leadership.

Responsibilities:
· Design and implement scalable big data architectures and solutions utilizing PySpark, SparkSQL, and Databricks on Azure or AWS.
· Build robust data models and maintain metadata-driven frameworks to optimize data processing and analytics.
· Build, test, and deploy sophisticated ETL pipelines using Azure Data Factory and other Azure-based tools.
· Ensure seamless data flow from various sources to destinations including ADLS Gen 2.
· Implement data quality checks and validation frameworks.
· Establish and enforce data governance principles, ensuring data security and compliance with industry standards and regulations.
· Manage version control and deployment pipelines using Git and DevOps best practices.
· Provide accurate effort estimation and manage project timelines effectively.
· Collaborate with cross-functional teams to ensure aligned project goals and objectives.
· Leverage industry knowledge in banking, insurance, and pharma to design tailor-made data solutions.
· Stay updated on industry trends and innovations to proactively implement cutting-edge technologies and methodologies.
· Facilitate discussions between technical and non-technical stakeholders to drive project success.
· Document technical solutions and design specifications clearly and concisely.

Qualifications:
· Bachelor's degree in Computer Science, Engineering, or a related field; Master’s degree preferred.
· 8+ years of experience in big data architecture and engineering.
· Extensive experience with PySpark, SparkSQL, and Databricks on Azure.
· Proficiency with Azure Data Lake Storage Gen 2, Azure Data Factory, Azure Event Hub, and Synapse.
· Strong experience in data modeling, metadata frameworks, and effort estimation.
· Experience with DevSecOps practices and proficiency in Git.
· Demonstrated experience implementing data quality, data security, and data governance measures.
· Industry experience in banking, insurance, or pharma is a significant plus.
· Excellent communication skills, capable of articulating complex technical concepts to diverse audiences.
· Certification in Azure, Databricks, or related cloud technologies is a must.
· Familiarity with machine learning frameworks and data science methodologies is preferred.

Mandatory skill sets: Data Architect/Data Engineer/AWS
Preferred skill sets: Data Architect/Data Engineer/AWS
Years of experience required: 8-12 years
Education qualification: B.E. (B.Tech)/M.E./M.Tech
Degrees/Field of Study required: Bachelor Degree, Master Degree
Required Skills: Data Architecture
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 33 more}
Posted 19 hours ago
10.0 years
0 Lacs
India
On-site
Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That's where you come in!

Job Description
REQUIREMENTS: Total experience: 10+ years. Strong working experience in machine learning, with a proven track record of delivering impactful solutions in NLP, machine vision, and AI. Proficiency in programming languages such as Python or R, and experience with data manipulation libraries (e.g., Pandas, NumPy). Strong understanding of statistical concepts and techniques, and experience applying them to real-world problems. Strong working experience in AWS. Strong programming skills in Python, and proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX, as well as machine learning libraries such as scikit-learn. Familiarity with MLOps tools such as MLflow, Kubeflow, and Airflow. Proficient experience with generative AI techniques such as GANs, VAEs, prompt engineering, and retrieval-augmented generation (RAG), and the ability to apply them to real-world problems. Hands-on skills in data engineering and building robust ML pipelines. Excellent communication skills and the ability to collaborate effectively with cross-functional teams.

RESPONSIBILITIES: Understanding the client's business use cases and technical requirements, and converting them into a technical design that elegantly meets the requirements. Mapping decisions to requirements and translating them for developers. Identifying different solutions and narrowing down the best option that meets the client's requirements. Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on aspects like extensibility, scalability, security, design patterns, user experience, and NFRs, and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and justifying the decisions taken. Carrying out POCs to make sure that suggested designs and technologies meet the requirements.

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
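On the MLOps tooling this posting names (MLflow among them), experiment tracking in its simplest form looks like the sketch below; the model, synthetic data, and metric are toy stand-ins, not a prescribed setup.

```python
# Hedged sketch of MLflow experiment tracking around a toy model.
# The dataset is synthetic and the hyperparameters are arbitrary examples.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf_baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_param("n_estimators", 100)   # record the hyperparameter
    mlflow.log_metric("accuracy", acc)      # record the evaluation result
    mlflow.sklearn.log_model(model, "model")  # persist the model artifact
```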
Posted 20 hours ago
4.0 years
8 - 25 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
We are looking for a Data Scientist with strong AI/ML engineering skills to join our high-impact team at KrtrimaIQ Cognitive Solutions. This is not a notebook-only role — you must have production-grade experience deploying and scaling AI/ML models in cloud environments, especially GCP, AWS, or Azure. The role involves building, training, deploying, and maintaining ML models at scale and integrating them with business applications. Basic model prototyping won't qualify — we're seeking hands-on expertise in building scalable machine learning pipelines.

Key Responsibilities
Design, train, test, and deploy end-to-end ML models on GCP (or AWS/Azure) to support product innovation and intelligent automation.
Implement GenAI use cases using LLMs.
Perform complex data mining and apply statistical algorithms and ML techniques to derive actionable insights from large datasets.
Drive the development of scalable frameworks for automated insight generation, predictive modeling, and recommendation systems.
Work on impactful AI/ML use cases in search and personalization, SEO optimization, marketing analytics, supply chain forecasting, and customer experience.
Implement real-time model deployment and monitoring using tools such as Kubeflow, Vertex AI, Airflow, and PySpark.
Collaborate with business and engineering teams to frame problems, identify data sources, build pipelines, and ensure production readiness.
Maintain deep expertise in cloud ML architecture, model scalability, and performance tuning.
Stay up to date with AI trends, LLM integration, and modern practices in machine learning and deep learning.

Technical Skills Required

Core ML & AI skills (must-have):
Strong hands-on ML engineering (70% of the role) — supervised/unsupervised learning, clustering, regression, optimization.
Experience with real-world model deployment and scaling, not just notebooks or prototypes (a minimal pipeline sketch follows the note below).
Good understanding of MLOps, the model lifecycle, and pipeline orchestration.
Strong with Python 3, Pandas, NumPy, scikit-learn, TensorFlow, PyTorch, Seaborn, Matplotlib, etc.
SQL proficiency and experience querying large datasets.
Deep understanding of linear algebra, probability/statistics, Big-O analysis, and scientific experimentation.

Cloud & big data stack:
Cloud experience in GCP (preferred), AWS, or Azure.
Hands-on experience with GCP tools (Vertex AI, Kubeflow, BigQuery, GCS) or equivalent AWS/Azure ML stacks.
Familiarity with Airflow, PySpark, or other pipeline orchestration tools.
Experience reading/writing data from/to cloud services.

Qualifications
Bachelor's/Master's/Ph.D. in Computer Science, Mathematics, Engineering, Data Science, Statistics, or a related quantitative field.
4+ years of experience in data analytics and machine learning roles.
2+ years of experience in Python or similar programming languages (Java, Scala, Rust).
Must have experience deploying and scaling ML models in production.

Nice to Have
Experience with LLM fine-tuning, graph algorithms, or custom deep learning architectures.
Background in taking academic research to production applications.
Experience building APIs and monitoring production ML models.
Familiarity with advanced math — graph theory, PDEs, optimization theory.

Communication & Collaboration
Strong ability to explain complex models and insights to both technical and non-technical stakeholders.
Ability to ask the right questions, clarify objectives, and align analytics with business goals.
Comfortable working cross-functionally in agile and collaborative teams.
Important Note
This is a data-science-heavy role — 70% of responsibilities involve building, training, deploying, and scaling AI/ML models. Cloud experience is mandatory (GCP preferred; AWS/Azure acceptable). Only candidates with hands-on experience deploying ML models into production (not just notebooks) will be considered.
Skills: Machine Learning (ML), Production Management, Large Language Models (LLM), AI/ML, and Google Cloud Platform (GCP)
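A minimal sketch of what "beyond the notebook" can mean in practice: preprocessing and the model wrapped in a single scikit-learn Pipeline that is persisted as one versionable artifact. The synthetic dataset, hyperparameters, and output path are illustrative placeholders, not details from the posting.

# Bundle scaling and the classifier into one Pipeline so the exact
# preprocessing used in training travels with the model into serving.
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=5_000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1_000)),
]).fit(X_train, y_train)

print(f"holdout accuracy: {pipeline.score(X_test, y_test):.3f}")

# One artifact to version and deploy; a serving layer (e.g., a managed
# endpoint or a REST service) loads it and calls pipeline.predict().
joblib.dump(pipeline, "model.joblib")

Keeping preprocessing inside the pipeline prevents training/serving skew, which is one of the most common failure modes when notebook models are moved to production.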
Posted 21 hours ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Position Name: ML Developer
Taleo ID:
Position Level: Staff
Employment Type: Permanent
Number of Openings: 1
Work Location: Kochi, Chennai, Noida, Bangalore, Pune, Kolkata, TVM

Position Details
As part of EY GDS Assurance Digital, you will be responsible for leveraging advanced machine learning techniques to develop innovative, high-impact models and solutions that drive growth and deliver significant business value. You will help EY's sector and service line professionals by developing analytics-enabled solutions, integrating data science activities with business-relevant aspects to gain insight from data. This is a full-time Machine Learning Developer role, responsible for building and deploying robust machine learning models to solve real-world business problems. You will work on the entire ML lifecycle, including data analysis, feature engineering, model training, evaluation, and deployment.

Requirements (including experience, skills, and additional qualifications)
A bachelor's degree (BE/BTech/MCA & MBA) in Computer Science, Engineering, Information Systems Management, Accounting, Finance, or a related field with adequate industry experience.

Technical skills requirements
Develop and implement machine learning models, including regression, classification (e.g., XGBoost, Random Forest), and clustering techniques (see the illustrative sketch after this listing).
Conduct exploratory data analysis (EDA) to uncover insights and trends within data sets.
Apply dimension-reduction techniques to improve model performance and interpretability.
Utilize statistical models to design and implement effective business solutions.
Evaluate and validate models to ensure robustness and reliability.
A solid background in Python.
Familiarity with time series forecasting.
Basic experience with cloud platforms such as AWS, Azure, or GCP.
Exposure to MLOps tools and practices (e.g., MLflow, Airflow, Docker) is a plus.

Additional skill requirements
Proficient at quickly understanding complex machine learning concepts and utilizing technology for tasks such as data modeling, analysis, visualization, and process automation.
Skilled in selecting and applying the most suitable standards, methods, tools, and frameworks for specific ML tasks and use cases.
Capable of collaborating effectively within cross-functional teams, while also being able to work independently on complex ML projects.
Demonstrates a strong analytical mindset and a systematic approach to solving machine learning challenges.
Excellent communication skills, able to present complex technical concepts clearly to both technical and non-technical audiences.

What we look for
A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment.
An opportunity to be part of a market-leading, multi-disciplinary team of 7,200+ professionals, in the only integrated global assurance business worldwide.
Opportunities to work with EY GDS Assurance practices globally, with leading businesses across a range of industries.

What working at EY offers
At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that's right for you

About EY
As a global leader in assurance, tax, transaction, and advisory services, we're using the finance products, expertise, and systems we've developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities, and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. And with a commitment to hiring and developing the most passionate people, we'll make our ambition to be the best employer by 2020 a reality.

If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible. Join us in building a better working world. Apply now.

EY provides equal employment opportunities to applicants and employees without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
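A minimal sketch of the classification and dimension-reduction skills listed in this posting, combining PCA with a Random Forest in one cross-validated pipeline; the synthetic dataset and hyperparameters are illustrative placeholders standing in for real business data.

# Reduce dimensionality with PCA, then classify with a Random Forest,
# evaluating the whole pipeline with 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(
    n_samples=2_000, n_features=50, n_informative=10, random_state=7
)

# PCA keeps enough components to explain 95% of the variance; the forest
# then classifies in the reduced space, which aids interpretability.
model = Pipeline([
    ("reduce", PCA(n_components=0.95)),
    ("forest", RandomForestClassifier(n_estimators=300, random_state=7)),
])

scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")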
Posted 22 hours ago