5.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Job Summary: We are seeking a highly experienced and innovative AI/ML Engineer to lead the design, development, and deployment of scalable machine learning solutions. The ideal candidate will have deep expertise in Python, Apache Spark, MLOps, and cloud platforms (GCP and AWS), along with experience in distributed query engines like Trino. You will play a key role in building intelligent systems that drive business value through data science and machine learning. Key Responsibilities: Design and implement scalable ML pipelines using Spark, Python, and MLOps best practices. Develop, train, and deploy machine learning models in production environments. Collaborate with data scientists, data engineers, and product teams to translate business problems into ML solutions. Optimize model performance and ensure reproducibility, versioning, and monitoring using MLOps tools and frameworks. Work with Trino and other distributed query engines for efficient data access and feature engineering. Deploy and manage ML workloads on GCP and AWS, leveraging services like SageMaker, Vertex AI, BigQuery, and EMR. Implement CI/CD pipelines for ML workflows and ensure compliance with data governance and security standards. Mentor junior engineers and contribute to the development of best practices and technical standards. Required Skills: Strong programming skills in Python with experience in ML libraries (e.g., scikit-learn, TensorFlow, PyTorch). Expertise in Apache Spark for large-scale data processing. Solid understanding of MLOps practices including model versioning, monitoring, and deployment. Experience with Trino or similar distributed SQL engines (e.g., Presto, Hive). Hands-on experience with GCP (e.g., Vertex AI, BigQuery) and AWS (e.g., SageMaker, S3, Lambda). Familiarity with containerization (Docker) and orchestration (Kubernetes). Strong problem-solving skills and ability to work in a fast-paced, collaborative environment. Preferred Qualifications: Master's or PhD in Computer Science, Data Science, or a related field. Experience with feature stores, model registries, and ML observability tools. Knowledge of data privacy, security, and compliance in ML systems. Contributions to open-source ML or data engineering projects. Impact You'll Make: NA This is a hybrid position and involves regular performance of job responsibilities virtually as well as in-person at an assigned TU office location for a minimum of two days a week. TransUnion Job Title Sr Developer, Applications Development
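For readers unfamiliar with the stack this posting describes, a minimal sketch of a Spark ML pipeline using PySpark's Pipeline API follows; the toy data and column names (f1, f2, label) are illustrative assumptions, not part of the role:

# A hedged sketch of a Spark ML pipeline; toy data and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ml-pipeline-sketch").getOrCreate()

df = spark.createDataFrame(
    [(0.0, 1.1, 0), (2.0, 1.0, 1), (2.0, 1.3, 1), (0.0, 1.2, 0)],
    ["f1", "f2", "label"],
)

# Spark ML estimators expect features packed into a single vector column.
assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

model = Pipeline(stages=[assembler, lr]).fit(df)
model.transform(df).select("label", "prediction").show()

Persisting the fitted pipeline (model.write().save(path)) and registering that artifact is typically the first step toward the versioning and reproducibility practices the posting mentions.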
Posted 1 week ago
4.0 - 8.0 years
9 - 14 Lacs
Bengaluru
Work from Office
;:" Your responsibilities The Senior Data Scientist will lead the development of data-driven solutions by leveraging traditional data science techniques and recent advancements in Generative AI to bring value to ADM. The role is integral to the Digital & Innovation team, driving rapid prototyping efforts, collaborating with cross-functional teams, and developing innovative approaches to solve business problems. This position requires a blend of expertise in traditional machine learning models, data science practices, and emerging AI technologies to create value and improve business outcomes. Key Responsibilities: Lead end-to-end machine learning projects, from data exploration, modeling, and deployment, ensuring alignment with business objectives. Utilize traditional AI/data science methods (e.g., regression, classification, clustering) and advanced AI methods (e.g., neural networks, NLP) to address business problems and optimize processes. Implement and experiment with Generative AI models based on business needs using Prompt Engineering, Retrieval Augmented Generation (RAG) or Finetuning, using LLM\u0027s, LVM\u0027s, TTS etc. Collaborate with teams across Digital & Innovation, business stakeholders, software engineers, and product teams, to rapidly prototype and iterate on new models and solutions. Mentor and coach junior data scientists and analysts, fostering an environment of continuous learning and collaboration. Adapt quickly to new AI advancements and technologies, continuously learning and applying emerging methodologies to solve complex problems. Work closely with other teams (e.g., Cybersecurity, Cloud Engineering) to ensure the successful integration of models into production systems. Ensure models meet rigorous performance, accuracy, and efficiency standards, performing cross-validation, tuning, and statistical checks. Communicate results and insights effectively to both technical and non-technical stakeholders, delivering clear recommendations for business impact. Ensure adherence to data privacy, security policies, and governance standards across all data science initiatives.
Posted 1 week ago
10.0 - 15.0 years
35 - 40 Lacs
Chennai
Work from Office
Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. How will you make an impact in this role? This role will be part of the Treasury Applications Platform team. We are currently modernizing our platform, migrating it to GCP. You will contribute towards making the platform more resilient and secure for future regulatory requirements and ensuring compliance and adherence to Federal Regulations. Preferably a BS or MS degree in computer science, computer engineering, or other technical discipline 10+ years of software development experience Ability to effectively interpret technical and business objectives and challenges and articulate solutions Willingness to learn new technologies and exploit them to their optimal potential Strong experience in Finance, Controllership, and Treasury Applications Strong background with Java, Python, PySpark, SQL, concurrency/parallelism, Oracle, big data, in-memory computing platforms Cloud experience, preferably with GCP Conduct IT requirements gathering. Define problems and provide solution alternatives. Solution architecture and system design. Create detailed system design documentation. Implement deployment plans. Understand business requirements with the objective of providing high-quality IT solutions. Support team in different phases of the project including problem definition, effort estimation, diagnosis, solution generation, design and deployment. Under supervision, participate in unit-level and organizational initiatives with the objective of providing high-quality and value-adding consulting solutions. Troubleshoot issues, diagnose problems, and conduct root-cause analysis. Perform secondary research as instructed by supervisor to assist in strategy and business planning. Minimum Qualifications: Strong experience with Cloud architecture Deep understanding of SDLC, OOAD, CI/CD, Containerization, Agile, Java, PL/SQL Preferred Qualifications: GCP Big data processing systems Finance Treasury Cash Management Kotlin experience Kafka Open Telemetry Network We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Posted 1 week ago
7.0 - 12.0 years
25 - 30 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Building off our Cloud momentum, Oracle has formed a new organization - Health Data Intelligence. This team will focus on product development and product strategy for Oracle Health, while building out a complete platform supporting modernized, automated healthcare. This is a net new line of business, constructed with an entrepreneurial spirit that promotes an energetic and creative environment. We are unencumbered and will need your contribution to make it a world class engineering center with the focus on excellence. Oracle Health Data Analytics has a rare opportunity to play a critical role in how Oracle Health products impact and disrupt the healthcare industry by transforming how healthcare and technology intersect. As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. Define specifications for significant new projects and specify, design and develop software according to those specifications. You will perform professional software development tasks associated with the developing, designing and debugging of software applications or operating systems. Design and build distributed, scalable, and fault-tolerant software systems. Build cloud services on top of the modern OCI infrastructure. Participate in the entire software lifecycle, from design to development, to quality assurance, and to production. Invest in the best engineering and operational practices upfront to ensure our software quality bar is high. Optimize data processing pipelines for orders of magnitude higher throughput and faster latencies. Leverage a plethora of internal tooling at OCI to develop, build, deploy, and troubleshoot software. Qualifications 7+ years of experience in the software industry working on design, development and delivery of highly scalable products and services. Understanding of the entire product development lifecycle that includes understanding and refining the technical specifications, HLD and LLD of world-class products and services, refining the architecture by providing feedback and suggestions, developing and reviewing code, driving DevOps, managing releases and operations. Strong knowledge of Java or JVM-based languages. Experience with multi-threading and parallel processing. Strong knowledge of big data technologies like Spark, Hadoop MapReduce, Crunch, etc. Past experience of building scalable, performant, and secure services/modules. Understanding of microservices architecture and API design. Experience with container platforms. Good understanding of testing methodologies. Experience with CI/CD technologies. Experience with observability tools like Splunk, New Relic, etc. Good understanding of versioning tools like Git/SVN.
Posted 1 week ago
5.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Job Title: Data/AI Engineer GenAI & Agentic AI Integration (Azure) Location: Bangalore, India Job Type: Full-Time About the Role We are seeking a highly skilled Data/AI Engineer to join our dynamic team, specializing in integrating cutting-edge Generative AI (GenAI) and Agentic AI solutions within the Azure cloud environment. The ideal candidate will have a strong background in Python, data engineering, and AI model integration, with hands-on experience working on Databricks, Snowflake, Azure Storage, and Palantir platforms. You will play a crucial role in designing, developing, and deploying scalable data and AI pipelines that power next-generation intelligent applications. Key Responsibilities Design, develop, and maintain robust data pipelines and AI integration solutions using Python on Azure Databricks. Integrate Generative AI and Agentic AI models into existing and new workflows to drive business innovation and automation. Collaborate with data scientists, AI researchers, software engineers, and product teams to deliver scalable and efficient AI-powered solutions. Orchestrate data movement and transformation across Azure-native services including Azure Databricks, Azure Storage (Blob, Data Lake), and Snowflake, ensuring data quality, security, and compliance. Integrate enterprise data using Palantir Foundry and leverage Azure services for end-to-end solutions. Develop and implement APIs and services to facilitate seamless AI model deployment and integration. Optimize data workflows for performance and scalability within Azure. Monitor, troubleshoot, and resolve issues related to data and AI pipeline performance. Document architecture, designs, and processes for knowledge sharing and operational excellence. Stay current with advances in GenAI, Agentic AI, Azure data engineering best practices, and cloud technologies. Required Qualifications Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field (or equivalent practical experience). 5+ years of professional experience in data engineering or AI engineering roles. Strong proficiency in Python for data processing, automation, and AI model integration. Hands-on experience with Azure Databricks and Spark for large-scale data engineering. Proficiency in working with Snowflake for cloud data warehousing. In-depth experience with Azure Storage solutions (Blob, Data Lake) for data ingestion and management. Familiarity with Palantir Foundry or similar enterprise data integration platforms. Demonstrated experience integrating and deploying GenAI or Agentic AI models in production environments. Knowledge of API development and integration for AI and data services. Strong problem-solving skills and ability to work in a fast-paced, collaborative environment. Excellent communication and documentation skills. Preferred Qualifications Experience with Azure Machine Learning, Azure Synapse Analytics, and other Azure AI/data services. Experience with MLOps, model monitoring, and automated deployment pipelines in Azure. Exposure to data governance, privacy, and security best practices. Experience with visualization tools and dashboard development. Knowledge of advanced AI model architectures, including LLMs and agent-based systems. #DataEngineer Job ID R-75732 Date posted 07/24/2025
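To make the ingestion responsibilities above concrete, here is a minimal, hedged sketch of one pipeline step on Azure Databricks: reading raw Parquet from Azure Data Lake Storage and landing it as a Delta table. The storage account, container, column, and table names are assumptions:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Databricks notebook

# Read raw Parquet from ADLS Gen2; the account/container/path are assumptions.
raw = spark.read.format("parquet").load(
    "abfss://raw@examplestorage.dfs.core.windows.net/events/"
)

# Light shaping before landing: derive a date partition from an assumed
# event_ts timestamp column.
curated = raw.withColumn("event_date", F.to_date("event_ts"))

(curated.write.format("delta")
    .mode("append")
    .partitionBy("event_date")
    .saveAsTable("curated.events"))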
Posted 1 week ago
10.0 - 15.0 years
14 - 19 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Explain complex technologies in simple terms to clients, peers, and management Work in tandem with our engineering team to identify and implement the most optimal cloud-based solutions for the company Identifying appropriate cloud services to support applications on the cloud Define and document best practices and strategies regarding application deployment in cloud and its maintenance Provide guidance, thought leadership, and mentorship to developer teams to build their cloud competencies Ensure cloud environments are in accordance with company security guidelines Orchestrating and automating cloud-based platforms throughout the company Analyze the usage of cloud services and implementing cost-saving strategies Analyze cloud-hosted application performance, uptime, and scalability, and maintain high standards for code quality and thoughtful design Stay current with industry trends, making recommendations as needed to help the organization innovate and excel. Working with Cloud Transformation team to build migration strategies for various migration use cases. Qualification & Experience Having a basic understanding or exposure to AI tools would be a plus. Expert in cloud networking. Expertise in connectivity between cloud and on-prem data centres. Expert in global routing, DNS and network segregation capabilities At least 10+ years' experience in IT/Cloud Infrastructure (Architect/SME/Lead) 5+ years' experience in cloud technologies AWS Certified Solutions Architect - Associate Azure Certified CKA - good to have Strong experience in working with Azure and AWS cloud services Strong experience in working with EKS and AKS Strong experience in working with cloud monitoring, logging, resource management and cost controls Cloud database experience, including knowledge of SQL and NoSQL, and related data stores such as Postgres. Strong awareness of networking concepts including best practices for cloud connectivity, TCP/IP, DNS, SMTP, HTTP and distributed networks. Strong understanding and experience in setting up highly resilient applications Strong understanding and experience in DR design and setup Strong awareness of cloud security concepts and services Strong awareness of various cloud migration phases and 6R migration approaches Good analytics skills to identify potential bottlenecks in applications' performance Maintaining data integrity by implementing proper access control for cloud services Functional / Domain (e.g. Underwriting, Claims Mgmt.) Understanding and experience with all the pillars of a well-architected framework Experience in IT Infrastructure Automation and Infrastructure as Code, e.g. Ansible, Terraform, etc. Knowledge of technology trends and best practices Experience in the use of architectural design software - MS Visio, ArchiMate, etc. Experience in working in a highly diverse, multi-national, multi-cultural environment Main tasks (any special / short term tasks that are occasional) At the direction of lead architects, develop and implement technical efforts to design, build, and deploy cloud applications, including large-scale data processing, computationally intensive statistical modeling, and advanced analytics Participate in all aspects of the software development lifecycle for cloud solutions, including planning, requirements, development, testing, and quality assurance Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures Educate teams on the implementation of new cloud-based initiatives, providing associated training when necessary Demonstrate exceptional problem-solving skills, with an ability to see and solve issues before they affect business productivity Serves as technology expert on delivering technical support service Works on all stages of the product life cycle from requirements through design, implementation, and into support. Helps clients set the company's strategic technology direction based on experience and real-time input from users. Evaluates technologies to determine strengths and weaknesses in architecture, implementation, and suitability. Makes recommendations consistent with the vision of the business area/enterprise
Posted 1 week ago
7.0 - 12.0 years
35 - 40 Lacs
Chennai
Work from Office
Your work days are brighter here. About the Team Workday Prism Analytics is a self-service analytics solution for Finance and Human Resources teams that allows companies to bring external data into Workday, combine it with existing people or financial data, and present it via Workday's reporting framework. This gives the end user a comprehensive collection of insights that can be acted on in a flash. We design, build and maintain the data warehousing systems that underpin our Analytics products. We straddle both applications and systems; the ideal candidate for this role is someone who has a passion for solving hyper-scale engineering challenges to serve the largest companies on the planet. About the Role As part of Workday's Prism Analytics team, you will be responsible for the integration of our Big Data Analytics stack with Workday's cloud infrastructure. You will work on building, improving and extending large-scale distributed data processing frameworks like Spark, Hadoop, and YARN in a multi-tenanted cloud environment. You will also be responsible for developing techniques for cluster management, high availability and disaster recovery of the Analytics Platform, Hadoop and Spark services. You will also engineer smart tools and frameworks that provide easy monitoring, troubleshooting, and manageability of our cloud-based analytic services. About You You are an engineer who is passionate about developing distributed applications in a multi-tenanted cloud environment. You take pride in developing distributed systems techniques to coordinate application services, ensuring the application remains highly available and working on disaster recovery for the applications in the cloud. You think not only about what is valuable for the development of the right abstractions and modules but also about programmatic interfaces to enable customer success. You also excel in the ability to balance priorities and make the right tradeoffs in feature content and timely delivery of features while ensuring customer success and technology leadership for the company. You can make all of this happen using Java, Spark, and related Hadoop technologies. Basic Qualifications 7+ years of software engineering experience. At least 5+ years of software development experience (using Java, Scala or other languages) with deep Linux/Unix expertise. Other Qualifications Experience in building Highly Available, Scalable, Reliable multi-tenanted big data applications on Cloud (AWS, GCP) and/or Data Center architectures. Working knowledge of distributed system principles. Experience with managing big data frameworks like Spark and/or Hadoop. Understanding of resource management using YARN, Kubernetes, etc. Pursuant to applicable Fair Chance law, Workday will consider for employment qualified applicants with arrest and conviction records. Workday is an Equal Opportunity Employer including individuals with disabilities and protected veterans. Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
Posted 1 week ago
4.0 - 9.0 years
35 - 40 Lacs
Chennai
Work from Office
Your work days are brighter here. About the Team Workday Prism Analytics is a self-service analytics solution for Finance and Human Resources teams that allows companies to bring external data into Workday, combine it with existing people or financial data, and present it via Workday's reporting framework. This gives the end user a comprehensive collection of insights that can be acted on in a flash. We design, build and maintain the data warehousing systems that underpin our Analytics products. We straddle both applications and systems; the ideal candidate for this role is someone who has a passion for solving hyper-scale engineering challenges to serve the largest companies on the planet. About the Role As part of Workday's Prism Analytics team, you will be responsible for the integration of our Big Data Analytics stack with Workday's cloud infrastructure. You will work on building, improving and extending large-scale distributed data processing frameworks like Spark, Hadoop, and YARN in a multi-tenanted cloud environment. You will also be responsible for developing techniques for cluster management, high availability and disaster recovery of the Analytics Platform, Hadoop and Spark services. You will also engineer smart tools and frameworks that provide easy monitoring, troubleshooting, and manageability of our cloud-based analytic services. About You You are an engineer who is passionate about developing distributed applications in a multi-tenanted cloud environment. You take pride in developing distributed systems techniques to coordinate application services, ensuring the application remains highly available and working on disaster recovery for the applications in the cloud. You think not only about what is valuable for the development of the right abstractions and modules but also about programmatic interfaces to enable customer success. You also excel in the ability to balance priorities and make the right tradeoffs in feature content and timely delivery of features while ensuring customer success and technology leadership for the company. You can make all of this happen using Java, Spark, and related Hadoop technologies. Basic Qualifications At least 4+ years of software development experience (using Java, Scala or other languages) with deep Linux/Unix expertise. Other Qualifications Experience in building Highly Available, Scalable, Reliable multi-tenanted big data applications on Cloud (AWS, GCP) and/or Data Center architectures. Working knowledge of distributed system principles. Understanding of big data frameworks like Spark and/or Hadoop. Understanding of resource management using YARN, Kubernetes, etc. Pursuant to applicable Fair Chance law, Workday will consider for employment qualified applicants with arrest and conviction records. Workday is an Equal Opportunity Employer including individuals with disabilities and protected veterans. Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
Posted 1 week ago
3.0 - 4.0 years
25 - 30 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Precisely is the leader in data integrity. We empower businesses to make more confident decisions based on trusted data through a unique combination of software, data enrichment products and strategic services. What does this mean to you? For starters, it means joining a company focused on delivering outstanding innovation and support that helps customers increase revenue, lower costs and reduce risk. In fact, Precisely powers better decisions for more than 12,000 global organizations, including 93 of the Fortune 100. Precisely's 2,500 employees are unified by four company core values that are central to who we are and how we operate: Openness, Determination, Individuality, and Collaboration. We are committed to career development for our employees and offer opportunities for growth, learning and building community. With a "work from anywhere" culture, we celebrate diversity in a distributed environment with a presence in 30 countries as well as 20 offices across 5 continents. Learn more about why it's an exciting time to join Precisely! Overview: As a Data Engineer I, this role combines expertise in demography with strong programming skills to create tools that help organizations understand population trends, social dynamics, and economic factors. The ideal candidate will work closely with data scientists, statisticians, and policy analysts to design, develop, and maintain applications that process large datasets, generate reports, and visualize demographic patterns. Responsibilities include coding, testing, and deploying software modules, integrating demographic models, and ensuring data accuracy and security. What you will do: Demographic professional with 3 to 4 years of industry experience, involved in the development of Demographic/Data Engineering solutions Develop and maintain software applications for demographic data analysis Collaborate with data scientists and demographers to integrate demographic models Design data processing pipelines to handle large and complex datasets Create visualizations and reports to communicate demographic insights Ensure data accuracy, security, and compliance with privacy regulations Test and debug software to maintain high-quality standards Stay updated with demographic research and technological trends Optimize software performance and scalability Document code and development processes clearly Provide technical support and training to end-users Write clear, compelling, and detailed (technical) user epics and stories with user acceptance criteria. Participate in story grooming exercises for crisp and unambiguous documentation and communication of features to be developed. Collaborate with other team members, and work with cross-functional teams according to requirements. Follow peer review of code as standard practice. Evaluate, learn, and incorporate new technologies into new and existing frameworks and solutions as applicable. Be agile and embrace change. What we are looking for: 3+ years of industry experience in demographic roles Bachelor's or Master's degree in Demography, Geospatial/Geography, Computer Science, Statistics, or related field Proficiency in programming languages such as Python, R Experience with database management and data querying (SQL) Strong understanding of demographic concepts and population data Familiarity with data visualization tools and libraries (e.g., Tableau) Experience in cloud technologies like AWS, Azure, etc.
Excellent knowledge of database concepts and complex query writing Excellent knowledge of query optimization for better performance Exposure to the geospatial domain and how geospatial data is stored in databases is preferred Ability to communicate with various stakeholders at all levels of the organization. Excellent verbal and written communication skills Excellent interpersonal skills and active listener Able to set and meet time-sensitive goals Able to handle multiple tasks simultaneously and adapt to change while providing structure to operations and go-to-market teams #LI-SR1 The personal data that you provide as a part of this job application will be handled in accordance with relevant laws. For more information about how Precisely handles the personal data of job applicants, please see the Precisely Global Applicant and Candidate Privacy Notice.
Posted 1 week ago
5.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Hybrid
Job title: Senior Software Engineer Experience: 5-8 years Primary skills: Python, Spark or PySpark, DWH ETL. Database: SparkSQL or PostgreSQL Secondary skills: Databricks (Delta Lake, Delta tables, Unity Catalog) Work Model: Hybrid (Weekly Twice) Cab Facility: Yes Work Timings: 10am to 7pm Interview Process: 3 rounds (3rd round F2F Mandatory) Work Location: Karle Town Tech Park Nagawara, Hebbal Bengaluru 560045 About Business Unit: The Architecture Team plays a pivotal role in the end-to-end design, governance, and strategic direction of product development within Epsilon People Cloud (EPC). As a centre of technical excellence, the team ensures that every product feature is engineered to meet the highest standards of scalability, security, performance, and maintainability. Their responsibilities span across architectural ownership of critical product features, driving techno-product leadership, enforcing architectural governance, and ensuring systems are built with scalability, security, and compliance in mind. They design multi-cloud and hybrid-cloud solutions that support seamless integration across diverse environments and contribute significantly to interoperability between EPC products and the broader enterprise ecosystem. The team fosters innovation and technical leadership while actively collaborating with key partners to align technology decisions with business goals. Through this, the Architecture Team ensures the delivery of future-ready, enterprise-grade, efficient and performant, secure and resilient platforms that form the backbone of Epsilon People Cloud. Why we are looking for you: You have experience working as a Data Engineer with strong database fundamentals and an ETL background. You have experience working in a data warehouse environment and dealing with data volumes in terabytes and above. You have experience working in relational data systems, preferably PostgreSQL and SparkSQL. You have excellent designing and coding skills and can mentor a junior engineer in the team. You have excellent written and verbal communication skills. You are experienced and comfortable working with global clients. You work well with teams and are able to work with multiple collaborators including clients, vendors and delivery teams. You are proficient with bug tracking and test management toolsets to support development processes such as CI/CD. What you will enjoy in this role: As part of the Epsilon Technology practice, the pace of the work matches the fast-evolving demands in the industry. You will get to work on the latest tools and technology and deal with data of petabyte scale. Work on homegrown frameworks on Spark, Airflow, etc. Exposure to the Digital Marketing Domain, where Epsilon is a market leader. Understand and work closely with consumer data across different segments that will eventually provide insights into consumer behaviours and patterns to design digital ad strategies. As part of the dynamic team, you will have opportunities to innovate and put your recommendations forward. Use existing standard methodologies and define new ones as industry standards evolve. Opportunity to work with Business, System and Delivery to build a solid foundation in the Digital Marketing Domain. An open and transparent environment that values innovation and efficiency. Click here to view how Epsilon transforms marketing with 1 View, 1 Vision and 1 Voice. What will you do?
Develop a deep understanding of the business context under which your team operates and present feature recommendations in an agile working environment. Lead, design and code solutions on and off database for ensuring application access to enable data-driven decision making for the company's multi-faceted ad serving operations. Work closely with Engineering resources across the globe to ensure enterprise data warehouse solutions and assets are actionable, accessible and evolving in lockstep with the needs of the ever-changing business model. This role requires deep expertise in Spark and strong proficiency in ETL, SQL, and modern data engineering practices. Design, develop, and manage ETL/ELT pipelines in Databricks using PySpark/SparkSQL, integrating various data sources to support business operations (a minimal upsert sketch follows at the end of this listing). Lead in the areas of solution design, code development, quality assurance, data modelling, and business intelligence. Mentor junior engineers in the team. Stay abreast of developments in the data world in terms of governance, quality and performance optimization. Able to have effective client meetings, understand deliverables, and drive successful outcomes. Qualifications: Bachelor's Degree in Computer Science or equivalent degree is required. 5-8 years of data engineering experience with expertise using Apache Spark and Databases (preferably Databricks) in marketing technologies and data management, and technical understanding in these areas. Monitor and tune Databricks workloads to ensure high performance and scalability, adapting to business needs as required. Solid experience in basic and advanced SQL writing and tuning. Experience with Python. Solid understanding of CI/CD practices with experience in Git for version control and integration for Spark data projects. Good understanding of Disaster Recovery and Business Continuity solutions. Experience with scheduling applications with complex interdependencies, preferably Airflow. Good experience in working with geographically and culturally diverse teams. Understanding of data management concepts in both traditional relational databases and big data lakehouse solutions such as Apache Hive, AWS Glue or Databricks. Excellent written and verbal communication skills. Ability to handle complex products. Good communication and problem-solving skills, with the ability to manage multiple priorities. Ability to diagnose and solve problems quickly. Diligent, able to multi-task, prioritize and able to quickly change priorities. Good time management. Good to have knowledge of cloud platforms (cloud security) and familiarity with Terraform or other infrastructure-as-code tools. About Epsilon: Epsilon is a global data, technology and services company that powers the marketing and advertising ecosystem. For decades, we have provided marketers from the world's leading brands the data, technology and services they need to engage consumers with 1 View, 1 Vision and 1 Voice. 1 View of their universe of potential buyers. 1 Vision for engaging each individual. And 1 Voice to harmonize engagement across paid, owned and earned channels. Epsilon's comprehensive portfolio of capabilities across our suite of digital media, messaging and loyalty solutions bridges the divide between marketing and advertising technology. We process 400+ billion consumer actions each day using advanced AI and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements.
Thanks to the work of every employee, Epsilon has been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Epsilon is a global company with more than 9,000 employees around the world.
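The upsert sketch referenced in the responsibilities above: a hedged illustration of an incremental ELT step in Databricks using the Delta Lake MERGE API. The table name, key column, and staging path are assumptions:

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# New and changed rows staged by an upstream ingest job (path is an assumption).
updates = spark.read.format("parquet").load("/mnt/staging/customer_updates/")

target = DeltaTable.forName(spark, "marketing.customers")
(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # refresh rows that already exist
    .whenNotMatchedInsertAll()   # add rows seen for the first time
    .execute())

MERGE makes the load idempotent: re-running the same staging batch leaves the target table unchanged, which is what makes retry-based scheduling (e.g., in Airflow) safe.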
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
Join us as a Big Data Engineer at Barclays, where you will spearhead the evolution of the digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences. To be successful as a Big Data Engineer, you should have experience with: - Full Stack Software Development for large-scale, mission-critical applications. - Mastery in distributed big data systems such as Spark, Hive, Kafka streaming, Hadoop, Airflow. - Expertise in Scala, Java, Python, J2EE technologies, Microservices, Spring, Hibernate, REST APIs. - Experience with n-tier web application development and frameworks like Spring Boot, Spring MVC, JPA, Hibernate. - Proficiency with version control systems, preferably Git; GitHub Copilot experience is a plus. - Proficient in API Development using SOAP or REST, JSON, and XML. - Experience developing back-end applications with multi-process and multi-threaded architectures. - Hands-on experience with building scalable microservices solutions using integration design patterns, Docker, containers, and Kubernetes. - Experience in DevOps practices like CI/CD, Test Automation, Build Automation using tools like Jenkins, Maven, Chef, Git, Docker. - Experience with data processing in cloud environments like Azure or AWS. - Data Product development experience is essential. - Experience in Agile development methodologies like SCRUM. - Result-oriented with strong analytical and problem-solving skills. - Excellent verbal and written communication and presentation skills. You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, digital and technology, as well as job-specific technical skills. This role is for the Pune location. Purpose of the role: To design, develop, and improve software, utilizing various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues. Accountabilities: - Development and delivery of high-quality software solutions by using industry-aligned programming languages, frameworks, and tools. Ensuring that code is scalable, maintainable, and optimized for performance. - Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives. - Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing. - Stay informed of industry technology trends and innovations and actively contribute to the organization's technology communities to foster a culture of technical excellence and growth. - Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions. - Implementation of effective unit testing practices to ensure proper code design, readability, and reliability. Analyst Expectations: - Perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement. - Requires in-depth technical knowledge and experience in the assigned area of expertise. - Thorough understanding of the underlying principles and concepts within the area of expertise. - Lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviors to create an environment for colleagues to thrive and deliver to a consistently excellent standard. - For an individual contributor, develop technical expertise in the work area, acting as an advisor where appropriate. - Will have an impact on the work of related teams within the area. - Partner with other functions and business areas. - Take responsibility for end results of a team's operational processing and activities. - Escalate breaches of policies/procedures appropriately. - Take responsibility for embedding new policies/procedures adopted due to risk mitigation. - Advise and influence decision-making within own area of expertise. - Take ownership of managing risk and strengthening controls in relation to the work you own or contribute to. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship - our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset - to Empower, Challenge, and Drive - the operating manual for how we behave.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
As a Lead AI/ML Researcher, your primary responsibility will be to spearhead the research, design, and development of cutting-edge AI and ML models that power an innovative AI-driven no-code development platform and a scalable AI inference and training orchestration system. Your role involves building scalable ML pipelines, optimizing models for production, mentoring team members, and translating research innovations into impactful product features that align with business objectives. You will be tasked with designing and implementing state-of-the-art machine learning and deep learning models for Natural Language Processing (NLP), computer vision, and generative AI that are relevant to the field of no-code AI coding and AI orchestration platforms. Additionally, you will develop, optimize, and fine-tune large-scale models, including transformer-based architectures and generative models. It will be crucial for you to architect and oversee end-to-end machine learning pipelines encompassing data processing, training, evaluation, deployment, and continuous monitoring. Collaboration with software engineering teams will be essential to ensure the successful productionization of models, focusing on factors such as reliability, scalability, and performance. Your role will also involve researching and integrating cutting-edge AI techniques and algorithms to uphold product competitiveness. Leading AI research efforts to contribute to intellectual property generation, patents, and academic publications will be a key aspect of your responsibilities. Moreover, you are expected to provide technical leadership and mentorship to junior AI/ML team members and collaborate cross-functionally with product managers, UX designers, and engineers to deliver AI-powered product features. Keeping abreast of AI research trends and technologies and evaluating their applicability, as well as ensuring compliance with data privacy and security standards in AI model development, will be integral to your role. Experience with AI-driven no-code platforms, familiarity with AI workflow orchestration frameworks, knowledge of probabilistic modeling and uncertainty quantification, hands-on experience with MLOps tools and practices, familiarity with cloud platforms and container orchestration, contributions to open-source AI projects or patent filings, understanding of AI ethics and data privacy compliance, and a strong academic research background with publications in top-tier AI/ML conferences are all considered advantageous. Qualifications for this role include a PhD in Computer Science, Electrical Engineering, Statistics, Mathematics, or related fields with a specialization in Artificial Intelligence, Machine Learning, or Deep Learning. A strong research publication record in reputable AI/ML conferences, demonstrated experience in developing and deploying deep learning models, proficiency in NLP and/or computer vision, hands-on experience with Python and ML frameworks, experience in building scalable ML pipelines, and knowledge of distributed training, GPU acceleration, and cloud infrastructure are highly desirable. Excellent problem-solving, analytical, and communication skills, along with prior experience in mentoring or leading junior AI researchers/engineers, will be beneficial for this position.
Posted 1 week ago
0.0 - 1.0 years
0 Lacs
Pune
Work from Office
Cohesity is the leader in AI-powered data security. Over 13,600 enterprise customers, including over 85 of the Fortune 100 and nearly 70% of the Global 500, rely on Cohesity to strengthen their resilience while providing Gen AI insights into their vast amounts of data. Formed from the combination of Cohesity with Veritas' enterprise data protection business, the company's solutions secure and protect data on-premises, in the cloud, and at the edge. Backed by NVIDIA, IBM, HPE, Cisco, AWS, Google Cloud, and others, Cohesity is headquartered in Santa Clara, CA, with offices around the globe. We've been named a Leader by multiple analyst firms and have been globally recognized for Innovation, Product Strength, and Simplicity in Design, and our culture. Want to join the leader in AI-powered data security? Cohesity offers a web-scale, hybrid cloud infrastructure for data management. We are looking for Software Engineers who are motivated and passionate and willing to enhance Cohesity's products by working on features, tools, and scripts that will make them easy to sell, deploy and maintain. You are not only a Software Engineer who designs and implements features but should have a knack for diagnosing problems in large bodies of complex code, understand scalability and performance, and work on fixes with rapid turnaround time and high quality. You will be part of our Product and Sustenance Engineering team and willing to work with Product Managers and, more importantly, with Customer Support, System Engineers and Customers. HOW YOU'LL SPEND YOUR TIME HERE Exposure to enterprise-grade production systems Mentorship from leaders across IT, AI/ML, and support functions Experience working with automation and intelligent workflows Internship certification, performance-based recommendations, and career development support WE'D LOVE TO TALK TO YOU IF YOU HAVE MANY OF THE FOLLOWING Pursuing a Master's or Bachelor's degree in Computer Science, MCA, IT, or equivalent technical discipline Academic or project experience in Enterprise Development, Data Science, Web Technologies, or AI/ML Strong problem-solving and debugging skills Proactive learning mindset and adaptability Clear communication and stakeholder engagement Team spirit and ownership of deliverables Ability to synthesize technical information for action Data Privacy Notice for Job Candidates: For information on personal data processing, please see our Privacy Policy. Equal Employment Opportunity Employer (EEOE) Cohesity is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status or any other category protected by law. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, or are limited in the ability or unable to access or use this online application process and need an alternative method for applying, you may contact us at 1-855-9COHESITY or talent@cohesity.com for assistance. In-Office Expectations Cohesity employees who are within a reasonable commute (e.g. within a forty-five (45) minute average travel time) work out of our core offices 2-3 days a week of their choosing.
Posted 1 week ago
3.0 - 6.0 years
5 - 8 Lacs
Hyderabad, Ahmedabad, Bengaluru
Work from Office
Lead MIS - Customer Experience Function Background: As part of our continued efforts to strengthen operational excellence in the Customer Experience (CX) function at Swiggy, we are looking to onboard a highly skilled MIS professional for the Customer Experience and Care vertical. This role will play a crucial part in streamlining payment processes, automating reporting systems and ensuring timely, accurate vendor payments aligned with internal controls and financial best practices. Key Responsibilities: Process payments on schedule, ensuring compliance with agreed timelines and minimizing delays. Reconcile accounts and resolve discrepancies in invoices or payment records. Maintain accurate financial records and documentation related to billing and payments. Liaise with vendors and internal teams to resolve billing-related queries promptly. Ensure adherence to financial regulations, internal controls, and audit requirements. Support month-end, quarter-end, and year-end closing activities related to accounts payable, accruals and payment dashboards with Finance and internal stakeholders. Generate regular reports and provide insights on payment trends, outstanding invoices, and other key metrics. Thoroughly validate invoices and billing using internal tools in line with company policies. Identify process gaps and propose data-driven improvements. Drive automation initiatives in billing and payment processes to improve efficiency and reduce manual intervention. Requirements: Bachelor's degree in any stream; Accounting, Finance, or a related field is a plus. Proven experience in vendor billing, accounts payable, or a similar finance role. Prior experience in automation of financial processes, with hands-on implementation preferred. Proficiency in accounting software such as SAP, QuickBooks, Oracle, or similar. Strong proficiency in MS Excel (advanced formulas, data manipulation, reporting) and SQL (Snowflake or similar). Working knowledge of Python for automation and data processing is highly desirable. Strong attention to detail in identifying anomalies and financial inconsistencies. Good communication and problem-solving skills. Knowledge of tax regulations and financial compliance is an advantage.
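As a hedged illustration of the reconciliation and automation work this role describes, a short pandas sketch follows; the file names and column layouts are assumptions:

import pandas as pd

# Assumed inputs: invoices.csv (invoice_id, vendor, amount) and
# payments.csv (invoice_id, paid_amount).
invoices = pd.read_csv("invoices.csv")
payments = pd.read_csv("payments.csv")

merged = invoices.merge(payments, on="invoice_id", how="left")

# Classify each invoice: unpaid, amount mismatch, or matched.
merged["status"] = "matched"
merged.loc[merged["paid_amount"].isna(), "status"] = "unpaid"
mismatch = merged["paid_amount"].notna() & (merged["amount"] != merged["paid_amount"])
merged.loc[mismatch, "status"] = "amount_mismatch"

# Exceptions report for follow-up with vendors and internal teams.
merged[merged["status"] != "matched"].to_csv("exceptions_report.csv", index=False)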
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
Gurugram
Work from Office
Job Description The role will involve helping to set up the scripting function, working with Research and Project Management stakeholders to define best practice and establish processes for scripting and data processing within the business, as well as managing actual scripting. This will include: Initial set up of the scripting and data processing function, working with a variety of senior stakeholders to understand requirements and suggest improvements. Implementation of new processes and policies, including preparing the best practice manuals and guides for Research and Project Management staff globally. Supporting a transition from an outsourced model to a blended in-house approach. Line management of Scripting Executives, including training, mentoring and involvement in recruitment and selection. Scripting and programming medium-complexity online surveys using the Confirmit platform, incorporating logic, custom routing, and question validation to ensure seamless user experience and data accuracy Conducting rigorous testing of surveys before launch, identifying and resolving any programming or logic errors to ensure data integrity and smooth respondent experience Cleansing and validating data in the relevant software (SPSS or similar) and processing survey data for tabulation Managing workloads of the team and delivery across projects Liaising with internal clients, advising on best practice, and assisting with problem solving Skills and Experience: More than 5 years of experience in a data processing and scripting role within a market research agency (ideally healthcare, but can be other market research sectors) Proficiency in using Confirmit's scripting platform, including authoring, logic implementation, and deployment Solid understanding of scripting languages such as JavaScript, HTML, and CSS for customizing survey functionality and layout Experience working with survey data formats (e.g., XML, CSV), as well as knowledge of relational databases and data structure management. Understanding of quantitative research methodologies, questionnaire structures, and healthcare market research best practices. Demonstrable experience of working with senior and junior team members Problem-solving skills (particularly relating to situations requiring analytical judgement and establishing best practice solutions) Excellent analytical and numerical skills Strong communication skills, particularly in the ability to explain technical points to non-technical people in an international environment. Don't meet every job requirement? That's okay! Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role, but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.
Posted 1 week ago
3.0 - 5.0 years
5 - 7 Lacs
Hyderabad
Work from Office
Key Responsibilities: Design, develop, and maintain large-scale data processing workflows using big data technologies Develop ETL/ELT pipelines to ingest, clean, transform, and aggregate data from various sources Work with distributed computing frameworks such as Apache Hadoop , Spark , Flink , or Kafka Optimize data pipelines for performance, reliability, and scalability Collaborate with data scientists, analysts, and engineers to support data-driven projects Implement data quality checks and validation mechanisms Monitor and troubleshoot data processing jobs and infrastructure Document data workflows, architecture, and processes for team collaboration and future maintenance
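As one concrete shape the "data quality checks and validation mechanisms" above might take, a minimal PySpark sketch; the path, key column, and thresholds are assumptions:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.format("parquet").load("/data/orders/")  # path is an assumption

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
duplicates = total - df.dropDuplicates(["order_id"]).count()

# Fail fast rather than let bad data propagate downstream.
assert null_keys == 0, f"{null_keys} rows have a null order_id"
assert duplicates / max(total, 1) < 0.01, f"duplicate rate too high: {duplicates}/{total}"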
Posted 1 week ago
2.0 - 7.0 years
4 - 9 Lacs
Mumbai
Work from Office
We are seeking a skilled Data Engineer with strong experience in PySpark, Python, Databricks, and SQL. The ideal candidate will be responsible for designing and developing scalable data pipelines and processing frameworks using Spark technologies. Key Responsibilities: Develop and optimize data pipelines using PySpark and Databricks Implement batch and streaming data processing solutions Collaborate with data scientists, analysts, and business stakeholders to gather data requirements Work with large datasets to perform data transformations Write efficient, maintainable, and well-documented PySpark code Use SQL for data extraction, transformation, and reporting tasks Monitor data workflows and troubleshoot performance issues on Spark platforms Ensure data quality, integrity, and security across systems Required Skills: 2+ years of hands-on experience with Databricks 4+ years of experience with PySpark and Python Strong knowledge of the Apache Spark ecosystem and its architecture Proficiency in writing complex SQL queries (3+ years) Experience in handling large-scale data processing and distributed systems Good understanding of data warehousing concepts and ETL pipelines Preferred Qualifications: Experience with cloud platforms like Azure Familiarity with data lakes and data lakehouse architecture Exposure to CI/CD and DevOps practices in data engineering projects is an added advantage
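For the streaming half of the responsibilities above, a minimal Structured Streaming sketch: consume a Kafka topic with Spark and land the raw payloads in Delta. The broker address, topic, and paths are assumptions:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Consume a Kafka topic; broker address and topic name are assumptions.
stream = (spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .select(F.col("value").cast("string").alias("payload")))

# Land the raw payloads in Delta; the checkpoint enables restart recovery.
(stream.writeStream.format("delta")
    .option("checkpointLocation", "/chk/orders")
    .start("/delta/orders_raw"))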
Posted 1 week ago
3.0 - 6.0 years
5 - 8 Lacs
Hyderabad
Work from Office
Description & Requirements Java Developer is responsible for designing and implementing high-quality, reusable Java components and services. The role involves using Spring Boot to implement microservice architectures and integrating them with various databases and data storage solutions, ensuring the performance and scalability of the software in line. Key Responsibilities: Develop reusable and maintainable Java components and services. Implement microservice architecture using Spring Boot. Design REST APIs with a focus on industry standards. Utilize Spark in Java for data processing tasks. Integrate code with databases, both relational (SQL) and NoSQL. Conduct unit testing to ensure functionality meets design specifications. Apply object-oriented programming (OOP) principles effectively. Collaborate with cross-functional teams to translate technical requirements into effective code Required Skills and Qualifications Bachelor s degree in Computer Science, Engineering, or a related field. Minimum of 4 years of experience in Java development. Strong proficiency in core and advanced Java, including the latest features. Experience with Spring Boot and Spark libraries in Java. Knowledge of database integration, both relational and NoSQL. Familiarity with development tools like Git, Docker, and Linux. Strong communication, problem-solving, and teamwork skills.
Posted 1 week ago
6.0 - 11.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Prudential's purpose is to be partners for every life and protectors for every future. Our purpose encourages everything we do by creating a culture in which diversity is celebrated and inclusion assured, for our people, customers, and partners. We provide a platform for our people to do their best work and make an impact on the business, and we support our people's career ambitions. We pledge to make Prudential a place where you can Connect, Grow, and Succeed.

- Design, build, and maintain data pipelines to ingest data from multiple sources into our cloud data platform.
- Ensure pipelines are built using defined standards and maintain comprehensive documentation.
- Adhere to and enforce data governance standards to maintain data integrity and compliance.
- Implement data quality rules to ensure the accuracy and reliability of data.
- Implement data security and protection controls around Databricks Unity Catalog.
- Utilize Azure Data Factory, Azure Databricks, and other Azure services to build and optimize data pipelines.
- Leverage SQL, Python/PySpark, and other programming languages for data processing and transformation.
- Stay updated with the latest Azure technologies and best practices.
- Provide technical guidance and support to team members and stakeholders.
- Maintain detailed documentation of data pipelines, processes, and data quality rules.
- Debug, fine-tune, and optimize large-scale data processing jobs.
- Generate reports and dashboards to monitor data pipeline performance and data quality metrics.
- Work collaboratively with data teams across Asia and Africa to understand data requirements and deliver solutions.
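One way the "data quality rules" responsibility above is commonly expressed in PySpark is a fail-fast check on key columns before a pipeline run proceeds. A minimal sketch, assuming a hypothetical bronze table and a 0.1% tolerance threshold:

```python
# Hypothetical data quality gate: abort the run if null or duplicate
# keys exceed a tolerance; table and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.table("bronze.policies")  # placeholder table name

total = df.count()
null_keys = df.filter(F.col("policy_id").isNull()).count()
dupes = total - df.dropDuplicates(["policy_id"]).count()

# Tolerate at most 0.1% bad rows before failing the pipeline run
if total and (null_keys + dupes) / total > 0.001:
    raise ValueError(
        f"DQ check failed: {null_keys} null keys, {dupes} duplicates of {total} rows"
    )
```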
Posted 1 week ago
10.0 - 15.0 years
35 - 40 Lacs
Gurugram
Work from Office
Job Description
The role will involve setting up the scripting function, working with Research and Project Management stakeholders to define best practice and establish processes for scripting and data processing within the business, as well as managing more junior data processing staff and hands-on scripting. This will include:
- Initial set-up of the scripting function, working with a variety of senior stakeholders to understand requirements and suggest improvements.
- Implementation of new processes and policies, including preparing best-practice manuals and guides for Research and Project Management staff globally.
- Supporting a transition from an outsourced model to a blended in-house approach.
- Line management of 1-2 junior Scripting Executives/Managers, including training, mentoring, and involvement in recruitment and selection.
- Scripting and programming complex online surveys using the Confirmit platform, incorporating advanced logic, custom routing, and question validation to ensure a seamless user experience and data accuracy.
- Conducting rigorous testing of surveys before launch, identifying and resolving any programming or logic errors to ensure data integrity and a smooth respondent experience.
- Cleansing and validating data in the relevant software (SPSS or similar) and processing survey data for tabulation.
- Managing team workloads and delivery across projects.
- Liaising with internal clients, advising on best practice, and assisting with problem solving.

Skills and Experience:
- More than 10 years of experience in a scripting role within a market research agency (ideally healthcare, but other market research sectors are acceptable).
- Proficiency with Confirmit's scripting platform, including authoring, logic implementation, and deployment.
- Solid understanding of scripting languages such as JavaScript, HTML, and CSS for customizing survey functionality and layout.
- Experience working with survey data formats (e.g., XML, CSV), as well as knowledge of relational databases and data structure management.
- Understanding of quantitative research methodologies, questionnaire structures, and healthcare market research best practices.
- Demonstrable experience of managing a team.
- Problem-solving skills (particularly in situations requiring analytical judgement and establishing best-practice solutions).
- Excellent analytical and numerical skills.
- Strong communication skills, particularly the ability to explain technical points to non-technical people in an international environment.

Don't meet every job requirement? That's okay! Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.
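The cleansing-and-validation step this role describes is platform-specific in practice (Confirmit, SPSS); purely as a generic, non-Confirmit illustration of the idea, a pandas sketch with hypothetical file, column names, and rules might look like:

```python
# Generic survey data cleansing sketch (NOT Confirmit- or SPSS-specific);
# file name, columns, and validation rules are all hypothetical.
import pandas as pd

df = pd.read_csv("survey_export.csv")

# Drop incomplete interviews and out-of-range Likert answers (1-5 scale)
df = df[df["status"] == "complete"]
df = df[df["q1_satisfaction"].between(1, 5)]

# Flag straight-liners: identical answers across all grid questions
grid = [c for c in df.columns if c.startswith("q2_")]
df["straight_liner"] = df[grid].nunique(axis=1) == 1

# Export a clean file for tabulation in the downstream tool
df[~df["straight_liner"]].to_csv("survey_clean.csv", index=False)
```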
Posted 1 week ago
8.0 - 13.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Prudential's purpose is to be partners for every life and protectors for every future. Our purpose encourages everything we do by creating a culture in which diversity is celebrated and inclusion assured, for our people, customers, and partners. We provide a platform for our people to do their best work and make an impact on the business, and we support our people's career ambitions. We pledge to make Prudential a place where you can Connect, Grow, and Succeed.

- Lead a team of 4-6 seasoned data engineers.
- Design, build, and maintain data pipelines to ingest data from multiple sources into our cloud data platform.
- Ensure pipelines are built using defined standards and maintain comprehensive documentation.
- Adhere to and enforce data governance standards to maintain data integrity and compliance.
- Implement data quality rules to ensure the accuracy and reliability of data.
- Implement data security and protection controls around Databricks Unity Catalog.
- Utilize Azure Data Factory, Azure Databricks, and other Azure services to build and optimize data pipelines.
- Leverage SQL, Python/PySpark, and other programming languages for data processing and transformation.
- Stay updated with the latest Azure technologies and best practices.
- Provide technical guidance and support to team members and stakeholders.
- Maintain detailed documentation of data pipelines, processes, and data quality rules.
- Debug, fine-tune, and optimize large-scale data processing jobs.
- Generate reports and dashboards to monitor data pipeline performance and data quality metrics.
- Work collaboratively with data teams across Asia and Africa to understand data requirements and deliver solutions.
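The "debug, fine-tune, and optimize" responsibility above often comes down to a handful of standard Spark moves. A hedged sketch of two of the most common ones — broadcasting a small dimension table and right-sizing shuffle parallelism — with hypothetical table names:

```python
# Illustrative Spark tuning sketch; tables, sizes, and the partition
# count are hypothetical and would be set from actual job metrics.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning_demo").getOrCreate()

facts = spark.read.table("silver.transactions")  # large fact table
dims = spark.read.table("silver.branches")       # small lookup table

# Broadcast the small side to avoid a full shuffle join
joined = facts.join(broadcast(dims), "branch_id")

# Match shuffle parallelism to data volume instead of the 200 default
spark.conf.set("spark.sql.shuffle.partitions", "64")

joined.write.mode("overwrite").saveAsTable("gold.transactions_enriched")
```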
Posted 1 week ago
4.0 - 9.0 years
6 - 11 Lacs
Bengaluru
Work from Office
Building off our Cloud momentum, Oracle has formed a new organization - Health Data Intelligence. This team will focus on product development and product strategy for Oracle Health, while building out a complete platform supporting modernized, automated healthcare. This is a net-new line of business, constructed with an entrepreneurial spirit that promotes an energetic and creative environment. We are unencumbered and will need your contribution to make it a world-class engineering center with a focus on excellence.

Oracle Health Data Analytics has a rare opportunity to play a critical role in how Oracle Health products impact and disrupt the healthcare industry by transforming how healthcare and technology intersect. As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures, define specifications for significant new projects, and specify, design, and develop software according to those specifications. You will perform professional software development tasks associated with developing, designing, and debugging software applications or operating systems.

- Design and build distributed, scalable, and fault-tolerant software systems.
- Build cloud services on top of the modern OCI infrastructure.
- Participate in the entire software lifecycle, from design to development, quality assurance, and production.
- Invest in the best engineering and operational practices upfront to ensure our software quality bar is high.
- Optimize data processing pipelines for orders-of-magnitude higher throughput and faster latencies.
- Leverage a plethora of internal tooling at HDI to develop, build, deploy, and troubleshoot software.

Qualifications:
- 4+ years of experience in the software industry working on the design, development, and delivery of highly scalable products and services.
- Understanding of the entire product development lifecycle, including understanding and refining technical specifications and the HLD and LLD of world-class products and services, refining the architecture by providing feedback and suggestions, developing and reviewing code, driving DevOps, and managing releases and operations.
- Strong knowledge of Java or JVM-based languages.
- Experience with multi-threading and parallel processing.
- Strong knowledge of big data technologies like Spark, Hadoop MapReduce, Crunch, etc.
- Past experience building scalable, performant, and secure services/modules.
- Understanding of microservices architecture and API design.
- Experience with container platforms.
- Good understanding of testing methodologies.
- Experience with CI/CD technologies.
- Experience with observability tools like Splunk, New Relic, etc.
- Good understanding of versioning tools like Git/SVN.
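The role itself calls for Java/JVM languages; purely as a language-neutral illustration of the parallel-processing pattern named in the qualifications — fanning CPU-bound work out across worker processes — here is a short Python sketch with a placeholder workload:

```python
# Conceptual parallel-processing sketch (the workload is a stand-in
# for any CPU-bound task over one partition of the data).
from concurrent.futures import ProcessPoolExecutor

def checksum(chunk: range) -> int:
    # Placeholder CPU-bound task on one partition
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    chunks = [range(i, i + 1_000_000) for i in range(0, 8_000_000, 1_000_000)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(checksum, chunks))
    print(sum(results))
```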
Posted 1 week ago
2.0 - 4.0 years
4 - 6 Lacs
Bengaluru
Work from Office
Why Lytx?
As our Cloud Operations Engineer - Machine Learning, you will join our Applied Machine Learning Team, which develops machine learning and computer vision algorithms to monitor and assess the state of drivers and their environments to identify risk and improve safety for our clients. You will contribute to all aspects of the development cycle to optimize workflows, dataset generation, model performance, and code efficiency to help enhance and differentiate us as the leader in the Video Safety and Telematics industry. If this sounds like you, we encourage you to apply!

What You'll Do:
- Build and maintain cloud deployment of ML models and surrounding infrastructure
- Contribute to infrastructure and process improvements for data collection, labeling, model development, and deployment
- Design and implement R&D data engineering solutions for delivery of ML model value from device to cloud, including message payload design, data ingest, and database architecture
- Help prepare and automate builds for device model deployment
- Assist on projects led by other team members via data processing, programming, monitoring of production applications, etc.
- Other duties as assigned

What You'll Need:
- Bachelor's degree in Computer Science or equivalent experience
- 2 to 4 years of experience with a strong background in MLOps, Python, and the GNU/Linux CLI
- Versatility and adaptability as an engineer who can address the evolving needs of the team
- Strong understanding of data engineering principles and architecture
- Knowledge of relational database modeling and integration; experience with NoSQL is helpful
- Ability to manage cloud resources and technologies within AWS, including SageMaker, EC2, and S3
- Experience with software automation tools, e.g., Airflow, Ansible, Terraform, Jenkins
- Experience with automated unit testing and regression testing methodologies
- Familiarity with Linux software build toolchains and patterns (e.g., Make, gcc)
- Experience with source control and tracking (git)
- A strong teammate who enjoys working in a collaborative, fast-paced, team-focused environment

Innovation Lives Here
You go all in no matter what you do, and so do we. At Lytx, we're powered by cutting-edge technology and Happy People. You want your work to make a positive impact in the world, and that's what we do. Join our diverse team of hungry, humble, and capable people united to make a difference. Together, we help save lives on our roadways. Find out how good it feels to be part of an inclusive, collaborative team. We're committed to delivering an environment where everyone feels valued, included, and supported to do their best work and share their voices. Lytx, Inc. is proud to be an equal opportunity/affirmative action employer and maintains a drug-free workplace. We're committed to attracting, retaining, and maximizing the performance of a diverse and inclusive workforce. EOE/M/F/Disabled/Vet.
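As a hedged sketch of the automation side of this role — Airflow is one of the tools the listing names — a two-task deployment DAG might look like the following; the DAG id, task bodies, and schedule are hypothetical placeholders:

```python
# Illustrative Airflow DAG for a model-deployment workflow;
# task implementations are placeholders, not real deployment logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def export_model():
    print("pull the latest artifact from the model registry")  # placeholder

def deploy_model():
    print("update the serving endpoint, e.g., via SageMaker APIs")  # placeholder

with DAG(
    dag_id="ml_model_deploy",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; earlier versions use schedule_interval
    catchup=False,
) as dag:
    export = PythonOperator(task_id="export_model", python_callable=export_model)
    deploy = PythonOperator(task_id="deploy_model", python_callable=deploy_model)
    export >> deploy  # deploy only after a successful export
```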
Posted 1 week ago