5.0 - 8.0 years
1 Lacs
Hyderābād
On-site
Assistant / Deputy Manager - Hyderabad
Qualification: B.E./B.Tech/MCA/M.Sc (IT) | 25-35
Experience: 5-8 years of relevant experience.
Role: We are looking for a technically strong and detail-oriented professional to manage and support our Cloudera Data Platform (CDP) ecosystem. The ideal candidate should possess in-depth expertise in distributed data processing frameworks and hands-on experience with core Hadoop components. This role requires both operational excellence and technical depth, with an emphasis on optimizing data processing pipelines and maintaining high system availability.
Job Description:
- Administer and maintain the Cloudera Data Platform (CDP) across all environments (dev/test/prod).
- Strong expertise in the Big Data ecosystem: Spark, Hive, Sqoop, HDFS, MapReduce, Oozie, YARN, HBase, NiFi.
- Develop and optimize complex Hive queries, including the use of analytical functions for reporting and data transformation (a sketch follows this listing).
- Create custom UDFs in Hive to handle specific business logic and integration needs.
- Ensure efficient data ingestion and movement using Sqoop, NiFi, and Oozie workflows.
- Work with various data formats (CSV, TSV, Parquet, ORC, JSON, Avro) and compression techniques (Gzip, Snappy) to maximize performance and storage.
- Monitor and tune performance of YARN and Spark applications for optimal resource utilization.
- In-depth knowledge of distributed systems architecture and parallel computing.
- Good knowledge of Oracle PL/SQL and shell scripting.
- Strong problem-solving and analytical thinking.
- Effective communication and documentation skills.
- Ability to collaborate across multi-disciplinary teams.
- Self-driven with the ability to manage multiple priorities under tight timelines.
Job Types: Full-time, Permanent
Pay: Up to ₹100,000.00 per year
Schedule: Day shift, Monday to Friday
Work Location: In person
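The Hive analytical-function work above lends itself to a short illustration. Below is a minimal sketch, assuming a Hive-backed Spark session and an invented sales.orders table with region, order_date, and revenue columns; none of these names come from the posting.

```python
# Hypothetical sketch: a Hive-style analytical (window) function query run
# through PySpark. Table and column names are invented for illustration.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-analytics-sketch")
    .enableHiveSupport()  # assumes a Hive metastore is configured
    .getOrCreate()
)

# Rank each region's orders by revenue and keep a running revenue total.
df = spark.sql("""
    SELECT
        region,
        order_date,
        revenue,
        RANK() OVER (PARTITION BY region ORDER BY revenue DESC) AS revenue_rank,
        SUM(revenue) OVER (
            PARTITION BY region
            ORDER BY order_date
            ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
        ) AS running_revenue
    FROM sales.orders
""")
df.show()
```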
Posted 17 hours ago
5.0 years
15 Lacs
India
On-site
Key Responsibilities:
- Architect, design, and optimize enterprise-grade NiFi data flows for large-scale ingestion, transformation, and routing.
- Manage Kafka clusters at scale (multi-node, multi-datacenter setups), ensuring high availability, fault tolerance, and maximum throughput.
- Create custom NiFi processors and develop advanced flow templates and best practices.
- Handle advanced Kafka configurations: partitioning, replication, producer tuning, consumer optimization, rebalancing, etc. (a producer-tuning sketch follows this listing).
- Implement stream processing using Kafka Streams and manage Kafka Connect integrations with external systems (databases, APIs, cloud storage).
- Design secure pipelines with end-to-end encryption, authentication (SSL/SASL), and RBAC for both NiFi and Kafka.
- Proactively monitor and troubleshoot performance bottlenecks in real-time streaming environments.
- Collaborate with infrastructure teams on scaling, backup, and disaster recovery planning for NiFi/Kafka.
- Mentor junior engineers and enforce best practices for data flow and streaming architectures.
Required Skills and Qualifications:
- 5+ years of hands-on production experience with Apache NiFi and Apache Kafka.
- Deep understanding of NiFi architecture (flow file repository, provenance, state management, backpressure handling).
- Mastery of Kafka internals: brokers, producers/consumers, ZooKeeper (or KRaft mode), offsets, ISR, topic configurations.
- Strong experience with Kafka Connect, Kafka Streams, Schema Registry, and data serialization formats (Avro, Protobuf, JSON).
- Expertise in tuning NiFi and Kafka for ultra-low latency and high throughput.
- Strong scripting and automation skills (Shell, Python, Groovy, etc.).
- Experience with monitoring tools: Prometheus, Grafana, Confluent Control Center, NiFi Registry, NiFi monitoring dashboards.
- Solid knowledge of security best practices in data streaming (encryption, access control, secret management).
- Hands-on experience deploying on cloud platforms (AWS MSK, Azure Event Hubs, GCP Pub/Sub with Kafka connectors).
- Bachelor's or Master's degree in Computer Science, Data Engineering, or an equivalent field.
Preferred (Bonus) Skills:
- Experience with containerization and orchestration (Docker, Kubernetes, Helm).
- Knowledge of stream processing frameworks like Apache Flink or Spark Streaming.
- Contributions to open-source NiFi/Kafka projects (a huge plus!).
Soft Skills:
- Analytical thinker with exceptional troubleshooting skills.
- Ability to architect solutions under tight deadlines.
- Leadership qualities for guiding and mentoring engineering teams.
- Excellent communication and documentation skills.
Please send your resume to hr@rrmgt.in or call 9081819473.
Job Type: Full-time
Pay: From ₹1,500,000.00 per year
Work Location: In person
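To make the producer-tuning bullet concrete, here is a minimal sketch using the confluent-kafka Python client. The broker addresses, topic, and specific knob values are illustrative assumptions, not recommendations from the posting.

```python
# Hypothetical sketch: producer-side tuning with confluent-kafka, touching the
# partitioning and throughput knobs the posting mentions.
from confluent_kafka import Producer

conf = {
    "bootstrap.servers": "broker1:9092,broker2:9092",
    "acks": "all",                 # wait for all in-sync replicas
    "enable.idempotence": True,    # avoid duplicates on retry
    "linger.ms": 20,               # small batching delay for throughput
    "batch.size": 131072,          # larger batches, fewer requests
    "compression.type": "snappy",  # cheap compression for high volume
}

producer = Producer(conf)

def on_delivery(err, msg):
    if err is not None:
        print(f"delivery failed: {err}")

# Keyed messages hash to a stable partition, preserving per-key ordering.
producer.produce("events", key="device-42", value=b'{"temp": 21.5}',
                 callback=on_delivery)
producer.flush()
```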
Posted 17 hours ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Overview: The person will be responsible for expanding and optimizing our data and data pipeline architecture. The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up.
You'll be responsible for:
- Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
You'd have:
We are looking for a candidate with 3+ years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
- Experience with data pipeline and workflow management tools: Apache Airflow, NiFi, Talend, etc. (a minimal Airflow sketch follows this listing).
- Experience with relational SQL and NoSQL databases, including ClickHouse, Postgres, and MySQL.
- Experience with stream-processing systems: Storm, Spark Streaming, Kafka, etc.
- Experience with object-oriented/object-function scripting languages: Python, Scala, etc.
- Experience building and optimizing data pipelines, architectures, and data sets.
- Advanced working SQL knowledge and experience with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
- Strong analytic skills related to working with unstructured datasets.
- Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable data stores.
Why join us?
- Impactful work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry.
- Tremendous growth opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development.
- Innovative environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated.
Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
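Since the posting leads with workflow-management tools, a minimal Apache Airflow DAG sketch follows; the DAG id, schedule, and extract/load callables are invented placeholders, and the Airflow 2.x API is assumed.

```python
# Hypothetical sketch: a two-task Airflow DAG of the kind the workflow
# management requirement points at.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source systems")

def load():
    print("write to the warehouse")

with DAG(
    dag_id="daily_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow <2.4 uses schedule_interval instead
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # extract runs before load
```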
Posted 17 hours ago
14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description and Requirements
Position Summary: A highly skilled Big Data (Hadoop) Administrator responsible for the installation, configuration, engineering, and architecture of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, scripting, and infrastructure-as-code for automating and optimizing operations is highly desirable. Experience in collaborating with cross-functional teams, including application development, infrastructure, and operations, is highly preferred.
Job Responsibilities:
- Manages the design, distribution, performance, replication, security, availability, and access requirements for large and complex Big Data clusters.
- Designs and develops the architecture and configurations to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, cluster changes, and compliance.
- Identifies and resolves issues utilizing structured tools and techniques.
- Provides technical assistance and mentoring to staff in all aspects of Hadoop cluster management; consults and advises application development teams on security, query optimization, and performance.
- Writes scripts to automate routine cluster management tasks (a sketch follows this listing) and documents maintenance processing flows per standards.
- Implements industry best practices while performing Hadoop cluster administration tasks.
- Works in an Agile model with a strong understanding of Agile concepts.
- Collaborates with development teams to provide and implement new features.
- Debugs production issues by analyzing logs directly and using tools like Splunk and Elastic.
- Addresses organizational obstacles to enhance processes and workflows.
- Adopts and learns new technologies based on demand and supports team members by coaching and assisting.
Education: Bachelor's degree in Computer Science, Information Systems, or another related field, with 14+ years of IT and infrastructure engineering work experience.
Experience: 14+ years of total IT experience and 10+ years of relevant experience in Big Data database technologies.
Technical Skills:
- Big Data Platform Management: Expertise in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, Apache Spark, as well as JanusGraph and IBM BigSQL.
- Data Infrastructure & Security: Proficient in designing and implementing robust data infrastructure solutions with a strong focus on data security, utilizing tools like Apache Ranger and Kerberos.
- Performance Tuning & Optimization: Skilled in performance tuning and optimization of big data environments, leveraging advanced techniques to enhance system efficiency and reduce latency.
- Backup & Recovery: Experienced in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
- Linux & Troubleshooting: Strong knowledge of Linux operating systems, with proven ability to troubleshoot and resolve complex technical issues, collaborating effectively with cross-functional teams.
- DevOps & Scripting: Proficient in scripting and automation using tools like Ansible, enabling seamless integration and automation of cluster operations. Experienced in infrastructure-as-code practices and observability tools such as Elastic.
- Agile & Collaboration: Strong understanding of Agile SAFe for Teams, with the ability to work effectively in Agile environments and collaborate with cross-functional teams.
- ITSM Process & Tools: Knowledgeable in ITSM processes and tools such as ServiceNow.
Other Critical Requirements:
- Automation and Scripting: Proficiency in automation tools and programming languages such as Ansible and Python to streamline operations and improve efficiency.
- Analytical and Problem-Solving Skills: Strong analytical and problem-solving abilities to address complex technical challenges in a dynamic enterprise environment.
- 24x7 Support: Ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability.
- Team Management and Leadership: Proven experience managing geographically distributed and culturally diverse teams, with strong leadership, coaching, and mentoring skills.
- Communication Skills: Exceptional written and oral communication skills, with the ability to clearly articulate technical and functional issues, conclusions, and recommendations to stakeholders at all levels.
- Stakeholder Management: Prior experience in effectively managing both onshore and offshore stakeholders, ensuring alignment and collaboration across teams.
- Business Presentations: Skilled in creating and delivering impactful business presentations to communicate key insights and recommendations.
- Collaboration and Independence: Demonstrated ability to work independently as well as collaboratively within a team environment, ensuring successful project delivery in a complex enterprise setting.
About MetLife: Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
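As a hedged illustration of the "scripts to automate routine cluster management tasks" responsibility, here is a small Python sketch that shells out to `hdfs dfsadmin -report`; the report-parsing detail, threshold, and plain print-based alerting are assumptions, and a real deployment would feed a monitoring system instead.

```python
# Hypothetical sketch: routine cluster-health automation that parses the
# NameNode report. Output format handling is a best-effort assumption.
import subprocess

def dead_datanodes() -> int:
    """Return the dead-datanode count from `hdfs dfsadmin -report`."""
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in report.splitlines():
        if line.startswith("Dead datanodes"):
            # expected shape: "Dead datanodes (1):"
            return int(line.split("(")[1].rstrip("):"))
    return 0  # section absent when no datanodes are dead

if __name__ == "__main__":
    dead = dead_datanodes()
    if dead > 0:
        print(f"ALERT: {dead} dead datanode(s)")
```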
Posted 18 hours ago
6.0 - 9.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Position Name: Data Engineer
Position Level: Senior
Position Details: EY's GDS Assurance Digital team's mission is to develop, implement and integrate technology solutions that better serve our audit clients and engagement teams. As a member of EY's core Assurance practice, you'll develop deep audit-related technical knowledge and outstanding database, data analytics and programming skills. Ever-increasing regulations require audit departments to gather, organize and analyse more data than ever before. Often the data necessary to satisfy these ever-increasing and complex regulations must be collected from a variety of systems and departments throughout an organization. Effectively and efficiently handling the variety and volume of data is often extremely challenging and time consuming for a company. EY's GDS Assurance Digital team members work side-by-side with the firm's partners, clients and audit technical subject matter experts to develop and incorporate technology solutions that enhance value-add, improve efficiencies and enable our clients with disruptive and market-leading tools supporting Assurance. GDS Assurance Digital provides solution architecture, application development, testing and maintenance support to the global Assurance service line both on a proactive basis and in response to specific requests. EY is currently seeking a Big Data Developer to join the GDS Assurance Digital practice in Bangalore, India, to work on various Microsoft technology-based projects for customers across the globe.
Qualifications and Requirements (including experience, skills and additional qualifications):
- A Bachelor's degree (BE/BTech/MCA & MBA) in Computer Science, Engineering, Information Systems Management, Accounting, Finance or a related field with adequate industry experience.
- BE/BTech/MCA with sound industry experience of 6 to 9 years.
Technical skills requirements:
- Experience with SQL and NoSQL databases such as HBase, Cassandra, or MongoDB (a sketch follows this listing).
- Good knowledge of Big Data querying tools, such as Pig and Hive.
- ETL implementation with any tool, such as Alteryx or Azure Data Factory.
- Good to have: experience in NiFi.
- Experience in at least one reporting tool, such as Power BI, Tableau, or Spotfire, is a must.
Analytical/Decision Making Responsibilities:
- An ability to quickly understand complex concepts and use technology to support data modeling, analysis, visualization or process automation.
- Selects appropriately from applicable standards, methods, tools and applications and uses them accordingly.
- Ability to work within a multi-disciplinary team structure, but also independently.
- Demonstrates an analytical and systematic approach to problem solving.
- Communicates fluently orally and in writing and can present complex technical information to both technical and non-technical audiences.
- Able to plan, schedule and monitor work activities to meet time and quality targets.
- Able to rapidly absorb new technical information and business context and apply them effectively.
- Ability to work in a team environment with strong customer focus, good listening, negotiation and problem-resolution skills.
Additional skills requirements:
- The expectation is that a Senior will be able to maintain long-term client relationships, network, and cultivate business development opportunities.
- Should have an understanding and experience of software development best practices.
- Must be a team player.
EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
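To ground the NoSQL requirement above, here is a minimal PyMongo aggregation sketch; the connection string, collection, and field names are invented for illustration only.

```python
# Hypothetical sketch: a MongoDB aggregation of the kind the NoSQL
# requirement implies. All names below are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
audits = client["assurance"]["audit_events"]

# Count flagged events per engagement, highest-volume first.
pipeline = [
    {"$match": {"status": "flagged"}},
    {"$group": {"_id": "$engagement_id", "events": {"$sum": 1}}},
    {"$sort": {"events": -1}},
    {"$limit": 10},
]
for row in audits.aggregate(pipeline):
    print(row["_id"], row["events"])
```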
Posted 20 hours ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Experience: 5+ years | Notice period: Immediate to 15 days | Rounds: 3 (virtual)
Mandatory skills: Apache Spark, Hive, Hadoop, Scala, Databricks
Job Description
The Role:
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights (a PySpark sketch follows this listing).
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools.
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.
Requirements:
- Experience in Big Data technologies (Hadoop, Spark, NiFi, Impala).
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
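A minimal PySpark sketch of the optimized-pipeline work described above follows (the posting emphasizes Scala, and the same flow translates directly); the paths, columns, and partitioning choices are assumptions, not part of the posting.

```python
# Hypothetical sketch: one dedupe-clean-partition step of a batch pipeline.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

raw = spark.read.parquet("s3://lake/raw/transactions/")

cleaned = (
    raw.dropDuplicates(["txn_id"])            # idempotent re-runs
       .filter(F.col("amount") > 0)           # drop obviously bad rows
       .withColumn("txn_date", F.to_date("event_ts"))
)

# Partitioning by date keeps downstream AI/ML reads selective.
(cleaned.write
        .mode("overwrite")
        .partitionBy("txn_date")
        .parquet("s3://lake/curated/transactions/"))
```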
Posted 22 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Current scope and span of work:
Summary: The need is for a data engineer to handle day-to-day activities involving data ingestion from multiple source locations, help identify data sources, troubleshoot issues, and engage with a third-party vendor to meet stakeholders' needs.
Required Skills:
- Python
- Processing of large quantities of text documents
- Extraction of text from Office and PDF documents (a sketch follows this listing)
- Input JSON to an API, output JSON to an API
- NiFi (or similar technology compatible with current EMIT practices)
- Basic understanding of AI/ML concepts
- Database/search engine/SOLR skills
- SQL: build queries to analyze, create and update databases
- Understands the basics of hybrid search
- Experience working with terabytes (TB) of data
- Basic OpenML/Python/Azure knowledge
- Scripting knowledge/experience in an Azure environment to automate
- Cloud systems experience related to search and databases
Platforms: Databricks, Snowflake, ESRI ArcGIS/SDE, new GenAI app being developed
Scope of work:
1. Ingest TB of data from multiple sources identified by the Ingestion Lead
2. Optimize data pipelines to improve data processing, speed, and data availability
3. Make data available for end users from several hundred LAN and SharePoint areas
4. Monitor data pipelines daily and fix issues related to scripts, platforms, and ingestion
5. Work closely with the Ingestion Lead and vendor on issues related to data ingestion
Technical Skills demonstrated:
1. SOLR: backend database
2. NiFi: data movement
3. PySpark: data processing
4. Hive & Oozie: job monitoring
5. Querying: SQL, HQL and SOLR querying
6. Python
Behavioral Skills demonstrated:
1. Excellent communication skills
2. Ability to receive direction from a Lead and implement
3. Prior experience working in an Agile setup, preferred
4. Experience troubleshooting technical issues and quality control checking of work
5. Experience working with a globally distributed team in different time zones
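As a sketch of the PDF text-extraction skill above, here is a small example using the pypdf library; the file path is a placeholder and the library choice is an assumption, not named in the posting.

```python
# Hypothetical sketch: pull the extractable text from every page of a PDF
# ahead of ingestion or indexing.
from pypdf import PdfReader

def pdf_to_text(path: str) -> str:
    """Concatenate the extractable text of every page."""
    reader = PdfReader(path)
    # extract_text() can return None for image-only pages, hence the `or ""`.
    return "\n".join(page.extract_text() or "" for page in reader.pages)

print(pdf_to_text("reports/site_survey.pdf")[:500])
```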
Posted 1 day ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are intending to hire a data engineer to handle day-to-day activities involving data ingestion from multiple source locations, help identify data sources, troubleshoot issues, and engage with a third-party vendor to meet stakeholders' needs.
Work Location: Chennai, Hyderabad, or Pune (work from office). Shift hours: 2.00pm to 11.00pm IST. Immediate joiners required.
Required Skills:
- Python
- Processing of large quantities of text documents
- Extraction of text from Office and PDF documents
- Input JSON to an API, output JSON to an API
- NiFi (or similar technology compatible with current EMIT practices)
- Basic understanding of AI/ML concepts
- Database/search engine/SOLR skills (a SOLR query sketch follows this listing)
- SQL: build queries to analyze, create and update databases
- Understands the basics of hybrid search
- Experience working with terabytes (TB) of data
- Basic OpenML/Python/Azure knowledge
- Scripting knowledge/experience in an Azure environment to automate
- Cloud systems experience related to search and databases
Platforms: Databricks, Snowflake, ESRI ArcGIS/SDE, new GenAI app being developed
Scope of work:
1. Ingest TB of data from multiple sources identified by the Ingestion Lead
2. Optimize data pipelines to improve data processing, speed, and data availability
3. Make data available for end users from several hundred LAN and SharePoint areas
4. Monitor data pipelines daily and fix issues related to scripts, platforms, and ingestion
5. Work closely with the Ingestion Lead and vendor on issues related to data ingestion
Technical Skills demonstrated:
1. SOLR: backend database
2. NiFi: data movement
3. PySpark: data processing
4. Hive & Oozie: job monitoring
5. Querying: SQL, HQL and SOLR querying
6. Python
Behavioral Skills demonstrated:
1. Excellent communication skills
2. Ability to receive direction from a Lead and implement
3. Prior experience working in an Agile setup, preferred
4. Experience troubleshooting technical issues and quality control checking of work
5. Experience working with a globally distributed team in different time zones
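For the SOLR querying skill above, here is a minimal sketch using the pysolr client; the core URL, field names, and query are invented for illustration.

```python
# Hypothetical sketch: a keyword search against a Solr core, sorted by a
# recency field. All names are placeholders.
import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/documents", timeout=10)

# Keyword search restricted to one ingestion source, newest first.
results = solr.search(
    "body:pipeline AND source:sharepoint",
    sort="ingested_at desc",
    rows=10,
)
for doc in results:
    print(doc.get("id"), doc.get("title"))
```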
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Required Skills:
- Python
- Processing of large quantities of text documents
- Extraction of text from Office and PDF documents
- Input JSON to an API, output JSON to an API (a sketch follows this listing)
- NiFi (or similar compatible technology)
- Basic understanding of AI/ML concepts
- Database/search engine/SOLR skills
- SQL: build queries to analyze, create and update databases
- Understands the basics of hybrid search
- Experience working with terabytes (TB) of data
- Basic OpenML/Python/Azure knowledge
- Scripting knowledge/experience in an Azure environment to automate
- Cloud systems experience related to search and databases
Platforms: Databricks, Snowflake, ESRI ArcGIS/SDE, new GenAI app being developed
Scope of work:
1. Ingest TB of data from multiple sources identified by the Ingestion Lead
2. Optimize data pipelines to improve data processing, speed, and data availability
3. Make data available for end users from several hundred LAN and SharePoint areas
4. Monitor data pipelines daily and fix issues related to scripts, platforms, and ingestion
5. Work closely with the Ingestion Lead and vendor on issues related to data ingestion
Technical Skills demonstrated:
1. SOLR: backend database
2. NiFi: data movement
3. PySpark: data processing
4. Hive & Oozie: job monitoring
5. Querying: SQL, HQL and SOLR querying
6. Python
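To illustrate the "JSON in, JSON out" API skill above, here is a minimal sketch with the requests library; the endpoint URL and payload shape are invented.

```python
# Hypothetical sketch: POST a JSON document to an ingestion API and read the
# JSON response back. Endpoint and fields are placeholders.
import requests

payload = {"doc_id": "abc-123", "text": "extracted document body"}

resp = requests.post(
    "https://ingest.example.com/api/v1/documents",
    json=payload,   # serializes the dict and sets the Content-Type header
    timeout=30,
)
resp.raise_for_status()

# The service is assumed to answer with JSON, e.g. an enrichment result.
print(resp.json())
```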
Posted 1 day ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Role Summary
Pfizer's purpose is to deliver breakthroughs that change patients' lives. Research and Development is at the heart of fulfilling Pfizer's purpose as we work to translate advanced science and technologies into the therapies and vaccines that matter most. Whether you are in the discovery sciences, ensuring drug safety and efficacy or supporting clinical trials, you will apply cutting-edge design and process development capabilities to accelerate and bring best-in-class medicines to patients around the world. Pfizer is seeking a highly skilled and motivated AI Engineer to join our advanced technology team. The successful candidate will be responsible for developing, implementing, and optimizing artificial intelligence models and algorithms to drive innovation and efficiency in our Data Analytics and Supply Chain solutions. This role demands a collaborative mindset, a passion for cutting-edge technology, and a commitment to improving patient outcomes.
Role Responsibilities
- Lead data modeling and engineering efforts within advanced data platforms teams to achieve digital outcomes; provide guidance and lead or co-lead moderately complex projects.
- Oversee the development and execution of test plans, creation of test scripts, and thorough data validation processes.
- Lead the architecture, design, and implementation of Cloud Data Lake, Data Warehouse, Data Marts, and Data APIs.
- Lead the development of complex data products that benefit PGS and ensure reusability across the enterprise.
- Collaborate effectively with contractors to deliver technical enhancements.
- Oversee the development of automated systems for building, testing, monitoring, and deploying ETL data pipelines within a continuous integration environment.
- Collaborate with backend engineering teams to analyze data, enhancing its quality and consistency.
- Conduct root cause analysis and address production data issues.
- Lead the design, development, and implementation of AI models and algorithms that advance data analytics and supply chain initiatives.
- Stay abreast of the latest advancements in AI and machine learning technologies and apply them to Pfizer's projects.
- Provide technical expertise and guidance to team members and stakeholders on AI-related initiatives.
- Document and present findings, methodologies, and project outcomes to various stakeholders.
- Integrate and collaborate with different technical teams across Digital to drive overall implementation and delivery.
- Ability to work with large and complex datasets, including data cleaning, preprocessing, and feature selection (a sketch follows this listing).
Basic Qualifications
- A bachelor's or master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline.
- Over 4 years of experience as a Data Engineer, Data Architect, or in Data Warehousing, Data Modeling, and Data Transformations.
- Over 2 years of experience in AI, machine learning, and large language model (LLM) development and deployment.
- A proven track record of successfully implementing AI solutions in a healthcare or pharmaceutical setting is preferred.
- Strong understanding of data structures, algorithms, and software design principles.
- Programming languages: proficiency in Python and SQL, and familiarity with Java or Scala.
- AI and automation: knowledge of AI-driven tools for data pipeline automation, such as Apache Airflow or Prefect.
- Ability to use GenAI or agents to augment data engineering practices.
Preferred Qualifications
- Data warehousing: experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake.
- ETL tools: knowledge of ETL tools like Apache NiFi, Talend, or Informatica.
- Big data technologies: familiarity with Hadoop, Spark, and Kafka for big data processing.
- Cloud platforms: hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
- Containerization: understanding of Docker and Kubernetes for containerization and orchestration.
- Data integration: skills in integrating data from various sources, including APIs, databases, and external files.
- Data modeling: understanding of data modeling and database design principles, including graph technologies like Neo4j or Amazon Neptune.
- Structured data: proficiency in handling structured data from relational databases, data warehouses, and spreadsheets.
- Unstructured data: experience with unstructured data sources such as text, images, and log files, and tools like Apache Solr or Elasticsearch.
- Data excellence: familiarity with data excellence concepts, including data governance, data quality management, and data stewardship.
Non-standard Work Schedule, Travel or Environment Requirements: Occasional travel required.
Work Location Assignment: Hybrid
The annual base salary for this position ranges from $96,300.00 to $160,500.00. In addition, this position is eligible for participation in Pfizer's Global Performance Plan with a bonus target of 12.5% of the base salary, and eligibility to participate in our share-based long-term incentive program. We offer comprehensive and generous benefits and programs to help our colleagues lead healthy lives and to support each of life's moments. Benefits offered include a 401(k) plan with Pfizer Matching Contributions and an additional Pfizer Retirement Savings Contribution, paid vacation, holiday and personal days, paid caregiver/parental and medical leave, and health benefits to include medical, prescription drug, dental and vision coverage. Learn more at Pfizer Candidate Site – U.S. Benefits (uscandidates.mypfizerbenefits.com). Pfizer compensation structures and benefit packages are aligned based on the location of hire. The United States salary range provided does not apply to Tampa, FL or any location outside of the United States. Relocation assistance may be available based on business needs and/or eligibility.
Sunshine Act: Pfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider's name, address and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address and the amount of payments made currently will be reported to the government. If you have questions regarding this matter, please do not hesitate to contact your Talent Acquisition representative.
EEO & Employment Eligibility: Pfizer is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status. Pfizer also complies with all applicable national, state and local laws governing nondiscrimination in employment, as well as work authorization and employment eligibility verification requirements of the Immigration and Nationality Act and IRCA. Pfizer is an E-Verify employer. This position requires permanent work authorization in the United States.
Information & Business Tech
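As a hedged sketch of the data cleaning, preprocessing, and feature-selection work named in the basic qualifications, here is a small scikit-learn pipeline on synthetic data; the data, the k=5 choice, and the classifier are illustrative assumptions.

```python
# Hypothetical sketch: scale, select informative features, then classify.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))            # 20 raw features
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # label depends on only 2 of them

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=5)),  # keep 5 most informative
    ("clf", LogisticRegression()),
])
model.fit(X, y)
print("train accuracy:", model.score(X, y))
```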
Posted 2 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.
Description
United's Kinective Media Data Engineering team designs, develops, and maintains massively scaling ad-technology solutions brought to life with innovative architectures, data analytics, and digital solutions.
Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose, "Connecting people. Uniting the world.", drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space-available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world.
Job Overview and Responsibilities
The Data Engineering organization is responsible for driving data-driven insights and innovation to support the data needs of commercial projects with a digital focus.
- The Data Engineer will partner with various teams to define and execute data acquisition, transformation, and processing, and make data actionable for operational and analytics initiatives that create sustainable revenue and share growth.
- Execute unit tests and validate expected results to ensure accuracy and integrity of data and applications through analysis, coding, writing clear documentation, and problem resolution.
- Drive the adoption of data processing and analysis within the AWS environment and help cross-train other members of the team.
- Leverage strategic and analytical skills to understand and solve customer- and business-centric questions.
- Coordinate and guide cross-functional projects that involve team members across all areas of the enterprise, vendors, external agencies and partners.
- Leverage data from a variety of sources to develop data marts and insights that provide a comprehensive understanding of the business.
- Develop and implement innovative solutions leading to automation.
- Use Agile methodologies to manage projects.
- Mentor and train junior engineers.
This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd, a wholly owned subsidiary of United Airlines Inc.
Qualifications
Required:
- BS/BA in computer science or a related STEM field
- 2+ years of IT experience in software development
- 2+ years of development experience using Java, Python, or Scala
- 2+ years of experience with Big Data technologies like PySpark, Hadoop, Hive, HBase, Kafka, and NiFi (a streaming sketch follows this listing)
- 2+ years of experience with database systems like Redshift, MS SQL Server, Oracle, and Teradata
- Creative, driven, detail-oriented individuals who enjoy tackling tough problems with data and insights
- Individuals who have a natural curiosity and desire to solve problems are encouraged to apply
- Must be legally authorized to work in India for any employer without sponsorship
- Must be fluent in English (written and spoken)
- Successful completion of an interview is required to meet job qualifications
- Reliable, punctual attendance is an essential function of the position
Preferred:
- Master's in computer science or a related STEM field
- Experience with cloud-based systems like AWS, Azure, or Google Cloud
- Certified Developer / Architect on AWS
- Strong experience with continuous integration and delivery using Agile methodologies
- Data engineering experience in the transportation/airline industry
- Strong problem-solving skills
- Strong knowledge of Big Data
GGN00002011
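To ground the PySpark-plus-Kafka requirement above, a minimal Structured Streaming sketch follows; it assumes the spark-sql-kafka connector package is available, and the broker, topic, and checkpoint path are placeholders.

```python
# Hypothetical sketch: read a Kafka topic as a structured stream and echo it
# to the console; a real job would write to S3/Redshift instead.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "booking-events")
         .load()
         # Kafka delivers bytes; cast key/value for downstream parsing.
         .select(F.col("key").cast("string"), F.col("value").cast("string"))
)

query = (
    events.writeStream
          .format("console")
          .option("checkpointLocation", "/tmp/ckpt/booking-events")
          .start()
)
query.awaitTermination()
```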
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Overview
This position is for a Lead Data Engineer in the Commercial Data as a Service group. In this position you will be responsible for helping define and maintain the data systems key to delivering successful outcomes for our customers. You will be hands-on and work closely to guide a team of Data Engineers in the associated data maintenance, integrations, enhancements, loads, and transformation processes for the organization. This key individual will work closely with Data Architects to design and implement solutions and ensure successful implementations.
Role
- Leads initiatives to build and maintain database technologies, environments, and applications, seeking opportunities for improvements and efficiencies.
- Architects internal data solutions as part of the full stack, including data modelling and integration with file-based as well as event-driven upstream systems.
- Writes SQL statements and procedures to optimize SQL execution and query development (a join-tuning sketch follows this listing).
- Effectively utilizes tools such as Spark (Scala, Python), NiFi, Spark Streaming, and Informatica for data ETL.
- Manages the deployment of data solutions that are optimally standardized, and database updates, to meet project deliverables.
- Leads database security posture, which includes proactively identifying security risks and implementing both risk mitigation plans and control functions.
- Oversees the resolution of chronic, complex problems to prevent future data performance issues.
- Supports process improvement efforts to identify and test opportunities for automation and/or reduction in time to deployment.
- Responsible for complex design (in conjunction with Data Architects), development, and performance and system testing; provides functional guidance and advice to experienced engineers.
- Mentors junior staff by providing training to develop technical skills and capabilities across the team.
All about you
- Experience developing a specialization in a particular functional area (e.g., modeling, data loads, transformations, replication, performance tuning, logical and physical database design, performance troubleshooting, data replication, backup and recovery, and data security) leveraging Apache Spark, NiFi, Databricks, Snowflake, Informatica, and streaming solutions.
- Experience leading a major work stream or multiple smaller work streams for a large domain initiative, often providing technical guidance and advice to project team members.
- Experience creating deliverables within the global database technology domains and sub-domains, supporting cross-functional leaders in the technical community to derive new solutions.
- Experience supporting automation and/or cloud delivery efforts; may perform financial and cost analysis.
- Experience in database architecture or other relevant IT experience.
- Experience leading business system application and database architecture design, influencing technology direction across a broad range of IT areas.
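One concrete instance of the SQL/Spark optimization work above is broadcasting a small dimension table so a join avoids a shuffle; the sketch below is illustrative, with invented table paths and columns.

```python
# Hypothetical sketch: broadcast-join tuning in PySpark.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

facts = spark.read.parquet("s3://warehouse/facts/payments/")      # large
merchants = spark.read.parquet("s3://warehouse/dims/merchants/")  # small

# Broadcasting the small side ships it to every executor, so the large side
# never shuffles; a frequent first step when a join dominates a job's runtime.
enriched = facts.join(broadcast(merchants), on="merchant_id", how="left")
enriched.explain()  # plan should show BroadcastHashJoin
```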
Posted 3 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Job Title: Java Full Stack Developer
Location: Bangalore
Candidate Expectation: The candidate should have a minimum of 5 years of experience developing Java applications, with experience in popular React.js workflows (such as Redux).
Job Description:
- Develop and maintain high-quality, efficient, and scalable backend applications.
- Participate in all phases of the software development lifecycle (SDLC).
- Debug and troubleshoot complex technical problems.
- Identify and implement performance optimizations.
React Developer Requirements, Qualifications & Skills:
- Proficiency in React.js and its core principles.
- Strong JavaScript, HTML5, and CSS3 skills.
- Knowledge of at least one messaging system, such as Kafka or Apache NiFi.
- Strong understanding of object-oriented programming (OOP) principles.
- Proficient in unit testing frameworks (e.g., JUnit).
- Experience with build automation tools (e.g., Maven, Gradle).
- Experience with version control systems (e.g., Git).
- Experience with one of these databases: Postgres, MongoDB, Cassandra.
- Experienced in containerized deployments using Docker and Kubernetes, with a DevOps mindset.
- Create automated tests for unit, integration, regression, performance, and functional testing, to meet established expectations and acceptance criteria.
- Document APIs using Lowe's established tooling.
Role: Java Full Stack Developer - Bangalore
Industry Type: ITES/BPO/KPO
Required Education: B.Tech
Employment Type: Full Time, Permanent
Key Skills: Java, Java + Microservices, Java Developer, Java Full Stack Developer
Other Information: Job Code GO/JC/312/2025; Recruiter Name: Devikala D
Posted 3 days ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Experience: 5+ years | Notice period: Immediate to 15 days | Rounds: 3 (virtual)
Mandatory skills: Apache Spark, Hive, Hadoop, Scala, Databricks
Job Description
The Role:
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools (a data-validation sketch follows this listing).
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.
Requirements:
- Experience in Big Data technologies (Hadoop, Spark, NiFi, Impala).
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
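For the data-validation responsibility above, here is a minimal PySpark quality-gate sketch; the column names, paths, and thresholds are assumptions for illustration.

```python
# Hypothetical sketch: validate a curated dataset before publishing it.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-sketch").getOrCreate()
df = spark.read.parquet("s3://lake/curated/transactions/")

total = df.count()
null_ids = df.filter(F.col("txn_id").isNull()).count()
dupes = total - df.dropDuplicates(["txn_id"]).count()

# Fail the pipeline run rather than ship bad data downstream.
assert null_ids == 0, f"{null_ids} rows missing txn_id"
assert dupes / max(total, 1) < 0.01, f"duplicate ratio too high: {dupes}"
print(f"validated {total} rows")
```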
Posted 3 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Us
At Particleblack, we drive innovation through intelligent experimentation with Artificial Intelligence. Our multidisciplinary team, comprising solution architects, data scientists, engineers, product managers, and designers, collaborates with domain experts to deliver cutting-edge R&D solutions tailored to your business. Our ecosystem empowers rapid execution with plug-and-play tools, enabling scalable, AI-powered strategies that fast-track your digital transformation. With a focus on automation and seamless integration, we help you stay ahead, letting you focus on your core while we accelerate your growth.
Responsibilities & Qualifications
- Data architecture design: Develop and implement scalable and efficient data architectures for batch and real-time data processing. Design and optimize data lakes, warehouses, and marts to support analytical and operational use cases.
- ETL/ELT pipelines: Build and maintain robust ETL/ELT pipelines to extract, transform, and load data from diverse sources. Ensure pipelines are highly performant, secure, and resilient to handle large volumes of structured and semi-structured data.
- Data quality and governance: Establish data quality checks, monitoring systems, and governance practices to ensure the integrity, consistency, and security of data assets. Implement data cataloging and lineage tracking for enterprise-wide data transparency.
- Collaboration with teams: Work closely with data scientists and analysts to provide accessible, well-structured datasets for model development and reporting. Partner with software engineering teams to integrate data pipelines into applications and services.
- Cloud data solutions: Architect and deploy cloud-based data solutions using platforms like AWS, Azure, or Google Cloud, leveraging services such as S3, BigQuery, Redshift, or Snowflake. Optimize cloud infrastructure costs while maintaining high performance.
- Data automation and workflow orchestration: Utilize tools like Apache Airflow, n8n, or similar platforms to automate workflows and schedule recurring data jobs. Develop monitoring systems to proactively detect and resolve pipeline failures.
- Innovation and leadership: Research and implement emerging data technologies and methodologies to improve team productivity and system efficiency. Mentor junior engineers, fostering a culture of excellence and innovation.
Required Skills:
- Experience: 7+ years of overall experience in data engineering roles, with at least 2+ years in a leadership capacity, and proven expertise in designing and deploying large-scale data systems and pipelines.
- Technical skills: Proficiency in Python, Java, or Scala for data engineering tasks. Strong SQL skills for querying and optimizing large datasets. Experience with data processing frameworks like Apache Spark, Beam, or Flink. Hands-on experience with ETL tools like Apache NiFi, dbt, or Talend. Experience in pub/sub and stream processing using Kafka, Kinesis, or the like (a consumer sketch follows this listing).
- Cloud platforms: Expertise in one or more cloud platforms (AWS, Azure, GCP) with a focus on data-related services.
- Data modeling: Strong understanding of data modeling techniques (dimensional modeling, star/snowflake schemas).
- Collaboration: Proven ability to work with cross-functional teams and translate business requirements into technical solutions.
Preferred Skills:
- Familiarity with data visualization tools like Tableau or Power BI to support reporting teams.
- Knowledge of MLOps pipelines and collaboration with data scientists.
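To make the pub/sub requirement concrete, a minimal confluent-kafka consumer loop follows; the broker address, group id, and topic name are invented placeholders.

```python
# Hypothetical sketch: a poll-loop consumer in a named consumer group.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "ingest-workers",        # partitions balance across the group
    "auto.offset.reset": "earliest",     # start from the beginning if new
})
consumer.subscribe(["raw-events"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(msg.topic(), msg.partition(), msg.value()[:80])
finally:
    consumer.close()  # commits offsets and leaves the group cleanly
```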
Posted 4 days ago
10.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Responsibilities
- Architect and build complex, scalable, and high-performance backend systems.
- Own end-to-end development and delivery of features and enhancements.
- Design and optimize large-scale distributed data processing using Apache Spark and NiFi.
- Collaborate with product managers, architects, and other stakeholders to align on technical vision.
- Lead code reviews and ensure adherence to best practices and high code quality.
- Mentor and guide junior and mid-level engineers within the team.
Requirements
- 10+ years of experience in backend development, with deep expertise in Java and Spring Boot.
- Proficient in data structures and algorithms, with strong problem-solving skills.
- Advanced experience with Apache Spark, Apache NiFi, and distributed systems.
- Proven ability to make architectural decisions and drive technical strategy.
- Solid understanding of system design, scalability, and performance tuning.
- Experience with Agile methodologies, DevOps practices, and CI/CD tools.
- Excellent leadership, communication, and stakeholder management skills.
Posted 4 days ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description
Oracle Customer Success Services
Building on the mindset that "Who knows Oracle better than Oracle?", Oracle Customer Success Services assists customers with their requirements for some of the most cutting-edge applications and solutions, utilizing the strengths of more than two decades of expertise in developing mission-critical solutions for enterprise customers and combining it with cutting-edge technology to provide our customers the speed, flexibility, resiliency, and security to optimize their investment, minimize risk, and achieve more. The business was established with an entrepreneurial mindset and supports a vibrant, imaginative, and highly varied workplace. We are free of obligations, so we'll need your help to turn it into a premier engineering hub that prioritizes quality.
Why?
Oracle Customer Success Services Engineering is responsible for designing, building, and managing cutting-edge solutions, services, and core platforms to support the managed cloud business, including but not limited to Oracle Cloud Infrastructure (OCI), Oracle Cloud Applications (SaaS), and Oracle Enterprise Applications. This position is for the CSS Architecture Team, and we are searching for the finest and brightest technologists as we embark on the road of cloud-native digital transformation. We operate under a garage culture, rely on cutting-edge technology in our daily work, and provide a highly innovative, creative, and experimental work environment. We prefer to innovate and move quickly, putting a strong emphasis on scalability and robustness. We need your assistance to build a top-tier engineering team that has a significant influence.
What?
As a Principal Data Science & AIML Engineer within the CSS CDO Architecture & Platform team, you'll lead efforts in designing and building scalable, distributed, resilient services that provide artificial intelligence and machine learning capabilities on OCI and Oracle Cloud Applications for the business. You will be responsible for the design and development of machine learning systems and applications, ensuring they meet the needs of our clients and align with the company's strategic objectives. The ideal candidate will have extensive experience in machine learning algorithms, model creation and evaluation, data engineering and data processing for large-scale distributed systems, and software development methodologies. We strongly believe in ownership and challenging the status quo. We expect you to bring critical thinking and long-term design impact while building solutions and products, defining system integrations, and addressing cross-cutting concerns. Being part of the architecture function also provides you with the unique ability to enforce new processes and design patterns that will be future-proof while building new services or products. As a thought leader, you will own and lead the complete SDLC from Architecture Design, Development, Test, Operational Readiness, and Platform SRE.
Responsibilities
As a member of the architecture team, you will be in charge of designing software products, services, and platforms, as well as creating, testing, and managing the systems and applications we create, in line with the architecture patterns and standards. As a core member of the Architecture Chapter, you will be expected to advocate for the adoption of software architecture and design patterns among cross-functional teams both within and outside of engineering roles.
You will also be expected to act as a mentor and advisor to the team(s) within the software and AIML domain. As we push for digital transformation throughout the organization, you will constantly be expected to think creatively and to optimize and harmonize business processes.
Core Responsibilities
- Lead the development of machine learning models and their integration with the full-stack software ecosystem and data engineering, and contribute to the design strategy.
- Collaborate with product managers and development teams to identify software requirements and define project scopes.
- Develop and maintain technical documentation, including architecture diagrams, design specifications, and system diagrams.
- Analyze and recommend new software technologies and platforms to ensure the company stays ahead of the curve.
- Work with development teams to ensure software projects are delivered on time, within budget, and to the required quality standards.
- Provide guidance and mentorship to junior developers.
- Stay up-to-date with industry trends and developments in software architecture and development practices.
Required Qualifications
- Bachelor's or Master's Degree in Computer Science, Machine Learning/AI, or a closely related field.
- 6+ years of experience in software development, machine learning, data science, and data engineering design.
- Proven ability to build and manage enterprise-distributed and/or cloud-native systems.
- Broad knowledge of cutting-edge machine learning models and strong domain expertise in both traditional and deep learning, particularly in areas such as Recommendation Engines, NLP & Transformers, Computer Vision, and Generative AI.
- Advanced proficiency in Python and frameworks such as FastAPI, Dapr, and Flask or equivalent (a FastAPI sketch follows this listing).
- Deep experience with ML frameworks such as PyTorch, TensorFlow, and Scikit-learn.
- Hands-on experience building ML models from scratch, transfer learning, and Retrieval Augmented Generation (RAG) using various techniques (Native, Hybrid, C-RAG, Graph RAG, Agentic RAG, and Multi-Agent RAG).
- Experience building agentic systems with SLMs and LLMs using frameworks like LangGraph + LangChain, AutoGen, LlamaIndex, Bedrock, Vertex, Agent Development Kit, Model Context Protocol (MCP), and Haystack or equivalent.
- Experience in data engineering using data lakehouse stacks such as ETL/ELT, and data processing with Apache Hadoop, Spark, Flink, Beam, and dbt.
- Experience with data warehousing and lakes such as Apache Iceberg, Hudi, Delta Lake, and cloud-managed solutions like OCI Data Lakehouse.
- Experience in data visualization and analytics with Apache Superset, Apache Zeppelin, Oracle Analytics Cloud, or similar.
- Hands-on experience working with various data types and storage formats, including NoSQL, SQL, graph databases, and data serialization formats like Parquet and Arrow.
- Experience with real-time distributed systems using streaming data with Kafka, NiFi, or Pulsar.
- Strong expertise in software design concepts, patterns (e.g., 12-Factor Apps), and tools to create CNCF-compliant software, with hands-on knowledge of containerization technologies like Docker and Kubernetes.
- Proven ability to build and deploy software applications on one or more public cloud providers (OCI, AWS, Azure, GCP, or similar).
- Demonstrated ability to write full-stack applications using polyglot programming with languages/frameworks like FastAPI, Python, and Golang.
- Experience designing API-first systems with application stacks like FARM and MERN, and technologies such as gRPC and REST.
- Solid understanding of Design Thinking, Test-Driven Development (TDD), BDD, and the end-to-end SDLC.
- Experience in DevOps practices, including Kubernetes, CI/CD, Blue-Green, and Canary deployments.
- Experience with microservice architecture patterns, including API Gateways, Event-Driven & Reactive Architecture, CQRS, and SAGA.
- Familiarity with OOP design principles (SOLID, DRY, KISS, Common Closure, and Module Encapsulation).
- Proven ability to design software systems using various design patterns (Creational, Structural, and Behavioral).
- Strong interpersonal skills and the ability to effectively communicate with business stakeholders.
- Demonstrated ability to drive technology adoption in AIML solutions and the CNCF software stack.
- Excellent analytical, problem-solving, communication, and leadership skills.
Qualifications: Career Level - IC4
About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
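As a sketch of the FastAPI-based, API-first service style the qualifications describe, here is a minimal scoring endpoint; the route, schemas, and stubbed model call are assumptions for illustration, not Oracle's actual stack.

```python
# Hypothetical sketch: a typed prediction endpoint with request/response
# schemas. The "model" is a stand-in for a real PyTorch/LLM invocation.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="scoring-sketch")

class ScoreRequest(BaseModel):
    text: str

class ScoreResponse(BaseModel):
    label: str
    confidence: float

@app.post("/v1/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    # Stand-in for a real model call; replace with actual inference.
    label = "positive" if "good" in req.text.lower() else "neutral"
    return ScoreResponse(label=label, confidence=0.5)

# Run with: uvicorn app:app --reload  (assuming this file is app.py)
```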
Posted 4 days ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description

Oracle Customer Success Services
Building on the mindset of "Who knows Oracle better than Oracle?", Oracle Customer Success Services assists customers with their requirements for some of the most cutting-edge applications and solutions, drawing on more than two decades of expertise in developing mission-critical solutions for enterprise customers and combining it with cutting-edge technology to give customers the speed, flexibility, resiliency, and security they need to optimize their investment, minimize risk, and achieve more. The business was established with an entrepreneurial mindset and supports a vibrant, imaginative, and highly diverse workplace. We are free of legacy obligations, and we'll need your help to turn the business into a premier engineering hub that prioritizes quality.

Why?
Oracle Customer Success Services Engineering is responsible for designing, building, and managing cutting-edge solutions, services, and core platforms to support the managed cloud business, including but not limited to Oracle Cloud Infrastructure (OCI), Oracle Cloud Applications (SaaS), and Oracle Enterprise Applications. This position is with the CSS Architecture Team, and we are searching for the finest and brightest technologists as we embark on the road to cloud-native digital transformation. We operate under a garage culture, rely on cutting-edge technology in our daily work, and provide a highly innovative, creative, and experimental work environment. We prefer to innovate and move quickly, with a strong emphasis on scalability and robustness. We need your assistance to build a top-tier engineering team with significant influence.

What?
As a Principal Data Science & AIML Engineer within the CSS CDO Architecture & Platform team, you'll lead efforts to design and build scalable, distributed, resilient services that provide artificial intelligence and machine learning capabilities on OCI and Oracle Cloud Applications for the business. You will be responsible for the design and development of machine learning systems and applications, ensuring they meet the needs of our clients and align with the company's strategic objectives. The ideal candidate will have extensive experience in machine learning algorithms, model creation and evaluation, data engineering and data processing for large-scale distributed systems, and software development methodologies. We strongly believe in ownership and challenging the status quo. We expect you to bring critical thinking and long-term design impact while building solutions and products, defining system integrations, and addressing cross-cutting concerns. Being part of the architecture function also gives you the unique ability to enforce new processes and design patterns that will be future-proof while building new services or products. As a thought leader, you will own and lead the complete SDLC, from architecture design through development, testing, operational readiness, and platform SRE.

Responsibilities
As a member of the architecture team, you will be in charge of designing software products, services, and platforms, as well as creating, testing, and managing the systems and applications we build in line with our architecture patterns and standards. As a core member of the Architecture Chapter, you will be expected to advocate for the adoption of software architecture and design patterns among cross-functional teams both within and outside of engineering roles. You will also be expected to act as a mentor and advisor to the team(s) within the software and AIML domain. As we push for digital transformation throughout the organization, you will constantly be expected to think creatively and to optimize and harmonize business processes.

Core Responsibilities
- Lead the development of machine learning models and their integration with the full-stack software ecosystem, drive data engineering, and contribute to the design strategy.
- Collaborate with product managers and development teams to identify software requirements and define project scopes.
- Develop and maintain technical documentation, including architecture diagrams, design specifications, and system diagrams.
- Analyze and recommend new software technologies and platforms to keep the company ahead of the curve.
- Work with development teams to ensure software projects are delivered on time, within budget, and to the required quality standards.
- Provide guidance and mentorship to junior developers.
- Stay up to date with industry trends and developments in software architecture and development practices.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Machine Learning/AI, or a closely related field.
- 6+ years of experience in software development, machine learning, data science, and data engineering design.
- Proven ability to build and manage enterprise-distributed and/or cloud-native systems.
- Broad knowledge of cutting-edge machine learning models and strong domain expertise in both traditional and deep learning, particularly in areas such as Recommendation Engines, NLP & Transformers, Computer Vision, and Generative AI.
- Advanced proficiency in Python and frameworks such as FastAPI, Dapr, and Flask, or equivalent.
- Deep experience with ML frameworks such as PyTorch, TensorFlow, and Scikit-learn.
- Hands-on experience building ML models from scratch, transfer learning, and Retrieval-Augmented Generation (RAG) using various techniques (Native, Hybrid, C-RAG, Graph RAG, Agentic RAG, and Multi-Agent RAG).
- Experience building agentic systems with SLMs and LLMs using frameworks like LangGraph + LangChain, AutoGen, LlamaIndex, Bedrock, Vertex, Agent Development Kit, Model Context Protocol (MCP), and Haystack, or equivalent.
- Experience in data engineering on data lakehouse stacks, including ETL/ELT and data processing with Apache Hadoop, Spark, Flink, Beam, and dbt.
- Experience with data warehouses and lakes such as Apache Iceberg, Hudi, Delta Lake, and cloud-managed solutions like OCI Data Lakehouse.
- Experience in data visualization and analytics with Apache Superset, Apache Zeppelin, Oracle Analytics Cloud, or similar.
- Hands-on experience with various data types and storage formats, including NoSQL, SQL, and graph databases, and data serialization formats like Parquet and Arrow.
- Experience with real-time distributed systems that process streaming data with Kafka, NiFi, or Pulsar.
- Strong expertise in software design concepts, patterns (e.g., 12-Factor Apps), and tools for creating CNCF-compliant software, with hands-on knowledge of containerization technologies like Docker and Kubernetes.
- Proven ability to build and deploy software applications on one or more public cloud providers (OCI, AWS, Azure, GCP, or similar).
- Demonstrated ability to write full-stack applications using polyglot programming with languages/frameworks like FastAPI, Python, and Golang.
- Experience designing API-first systems with application stacks like FARM and MERN, and technologies such as gRPC and REST.
- Solid understanding of Design Thinking, Test-Driven Development (TDD), BDD, and the end-to-end SDLC.
- Experience with DevOps practices, including Kubernetes, CI/CD, and Blue-Green and Canary deployments.
- Experience with microservice architecture patterns, including API Gateways, Event-Driven & Reactive Architecture, CQRS, and SAGA.
- Familiarity with OOP design principles (SOLID, DRY, KISS, Common Closure, and Module Encapsulation).
- Proven ability to design software systems using various design patterns (Creational, Structural, and Behavioral).
- Strong interpersonal skills and the ability to communicate effectively with business stakeholders.
- Demonstrated ability to drive technology adoption in AIML solutions and the CNCF software stack.
- Excellent analytical, problem-solving, communication, and leadership skills.

Qualifications
Career Level - IC4

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or an accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 4 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Senior Cloud Data Developer

We are seeking an exceptional Cloud Data Developer who can bridge the gap between data engineering and cloud-native application development. This role combines strong programming skills with data engineering expertise to build and maintain scalable data solutions in the cloud.

Position Overview:
Work with cutting-edge cloud technologies to develop data-intensive applications, create efficient data pipelines, and build robust data processing systems using AWS services and modern development practices.

Core Responsibilities:
- Design and develop data-centric applications using Java Spring Boot and AWS services
- Create and maintain scalable ETL pipelines using AWS EMR and Apache NiFi
- Implement data workflows and orchestration using AWS MWAA (Managed Workflows for Apache Airflow)
- Build real-time data processing solutions using AWS SNS/SQS and AWS Pipes
- Develop and optimize data storage solutions using AWS Aurora and S3
- Manage data discovery and metadata using the AWS Glue Data Catalog
- Create search and analytics solutions using AWS OpenSearch Service
- Design and implement event-driven architectures for data processing

Technical Requirements:
Primary Skills:
- Strong proficiency in Java and the Spring Boot framework
- Extensive experience with AWS data services: AWS EMR for large-scale data processing, AWS Glue Data Catalog for metadata management, AWS OpenSearch Service for search and analytics, AWS Aurora for relational databases, and AWS S3 for data lake implementation
- Expertise in data pipeline development using Apache NiFi, AWS MWAA, AWS Pipes, and AWS SNS/SQS
Posted 4 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Saras Analytics:
We are an ecommerce-focused, end-to-end data analytics firm assisting enterprises and brands in data-driven decision making to maximize business value. Our suite of work spans the extraction, transformation, visualization, and analysis of data, delivered via industry-leading products, solutions, and services. Our flagship product is Daton, an ETL tool. We have now ventured into building exciting, easy-to-use data visualization solutions on top of Daton. And lastly, we have a world-class data team that understands the story the numbers are telling and articulates it to CXOs, thereby creating value.

Where we are Today:
We are a bootstrapped, profitable, and fast-growing (2x y-o-y) startup with old-school value systems. We play in a very exciting space: the intersection of data analytics and ecommerce, both of which are game changers. Today, the global economy faces headwinds forcing companies to downsize, outsource, and offshore, creating strong tailwinds for us. We are an employee-first company that values and encourages talent, and we live by those values at every stage of our work without compromising on the value we create for our customers. We strive to make Saras a career, not just a job, for the talented folks who have chosen to work with us.

The Role:
We are seeking an accomplished Lead Data Engineer with strong programming skills, cloud expertise, and in-depth knowledge of BigQuery/Snowflake data warehousing technologies. As a key leader in our data engineering team, you will play a critical role in designing, implementing, and optimizing data pipelines, leveraging your expertise in programming, cloud platforms, and modern data warehousing solutions.

Responsibilities:
- Data Pipeline Architecture: Lead the design and architecture of scalable and efficient data pipelines, ensuring optimal performance and reliability.
- Programming and Scripting: Utilize strong programming skills, particularly in languages like Python, to develop robust and maintainable data engineering solutions.
- Cloud Platform Expertise: Apply extensive experience with cloud platforms (e.g., AWS, Azure, Google Cloud) to design, deploy, and optimize data engineering solutions in a cloud environment.
- BigQuery/Snowflake Knowledge: Demonstrate deep understanding and hands-on experience with BigQuery/Snowflake for efficient data storage, processing, and analysis.
- ETL Processes: Lead the development of Extract, Transform, Load (ETL) processes, ensuring seamless integration of data from various sources into the data warehouse.
- Data Modeling and Optimization: Design and implement effective data models to support ETL processes and ensure data integrity and efficiency.
- Collaboration and Leadership: Collaborate with cross-functional teams, providing technical leadership and guidance to junior data engineers. Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver effective data solutions.
- Quality Assurance: Implement comprehensive data quality checks and validation processes to ensure the accuracy and completeness of data.
- Documentation: Create and maintain detailed documentation for data engineering processes, data models, and cloud configurations.

Technical Skills:
- Programming Languages: Expertise in programming languages, with a strong emphasis on Python.
- Cloud Platforms: Extensive experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Big Data Technologies: Proficiency in big data technologies and frameworks for distributed computing.
- Data Warehousing: In-depth knowledge of modern data warehousing solutions, with specific expertise in BigQuery/Snowflake.
- ETL Tools: Experience with ETL tools like Apache NiFi, Talend, or similar.
- SQL: Strong proficiency in writing and optimizing SQL queries for data extraction, transformation, and loading.
- Collaboration Tools: Experience using collaboration and project management tools for effective communication and project tracking.

Soft Skills:
- Strong leadership and mentoring capabilities.
- Excellent communication and presentation skills.
- Strategic thinking and problem-solving abilities.
- Ability to work collaboratively in a cross-functional team environment.

Educational Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.

Experience: 8+ years of experience in data engineering roles with a focus on programming, cloud platforms, and data warehousing.

If you are an experienced Lead Data Engineer with a strong programming background, cloud expertise, and specific knowledge of BigQuery/Snowflake, we encourage you to apply. Please submit your resume and a cover letter highlighting your technical skills, leadership experience, and contributions to data engineering projects.
Posted 4 days ago
3.0 years
15 - 20 Lacs
Gurgaon
On-site
Profile - Sr Data Engineer

JOB DESCRIPTION
Experience: 3+ years
Office Location: Phase IV, Udyog Vihar, Sector 18, Gurugram
Working Location: Hyderabad / Gurgaon
Interview Mode: F2F
Work Mode: Hybrid

Job Summary:
The ideal candidate is a hands-on technology developer with experience in developing scalable applications and platforms. They must be at ease working in an agile environment with little supervision, and should be self-motivated with a passion for problem-solving and continuous learning.

Role and responsibilities:
- Strong technical, analytical, and problem-solving skills
- Strong organizational skills, with the ability to work autonomously as well as in a team-based environment
- Data pipeline framework development

Technical skills requirements:
- CDH on-premise for data processing and extraction
- Ability to own and deliver on large, multi-faceted projects
- Fluency in complex SQL and experience with RDBMSs
- Project experience with CDH, Spark, PySpark, Scala, Python, NiFi, Hive, and NoSQL DBs
- Experience designing and building big data pipelines
- Experience working on large-scale, distributed systems
- Strong hands-on experience with programming languages such as PySpark, Scala with Spark, and Python
- Certification in Hadoop/Big Data – Hortonworks/Cloudera
- Unix or shell scripting
- Strong delivery background across high-value, business-facing technical projects in major organizations
- Experience managing client delivery teams, ideally coming from a Data Engineering / Data Science environment

Qualifications: B.Tech/M.Tech/MS or BCA/MCA degree from a reputed university

Job Type: Full-time
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Schedule: Day shift
Work Location: In person
Posted 4 days ago
Accenture
36723 Jobs | Dublin
Wipro
11788 Jobs | Bengaluru
EY
8277 Jobs | London
IBM
6362 Jobs | Armonk
Amazon
6322 Jobs | Seattle, WA
Oracle
5543 Jobs | Redwood City
Capgemini
5131 Jobs | Paris, France
Uplers
4724 Jobs | Ahmedabad
Infosys
4329 Jobs | Bangalore, Karnataka
Accenture in India
4290 Jobs | Dublin 2