
17230 Spark Jobs - Page 6

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Description
Skills Required: Bash/shell scripting, GitHub, ETL, Apache Spark, data validation strategies, Docker & Kubernetes (for containerized deployments), monitoring tools (Prometheus, Grafana), strong Python, Grafana/Prometheus, Power BI/Tableau (important)
Requirements
- Extensive hands-on experience implementing data migration and data processing
- Strong experience implementing ETL/ELT processes and building data pipelines, including workflow management, job scheduling, and monitoring
- Experience building and implementing Big Data platforms on-prem or on cloud, covering ingestion (batch and real-time), processing (batch and real-time), polyglot storage, and data access
- Good understanding of data warehouse, data governance, data security, data compliance, data quality, metadata management, master data management, and data catalog concepts
- Proven understanding and demonstrable implementation experience of big data platform technologies on the cloud (AWS and Azure), including surrounding services like IAM, SSO, cluster monitoring, log analytics, etc.
- Experience with source code management tools such as TFS or Git
- Knowledge of DevOps with CI/CD pipeline setup and automation
- Building and integrating systems to meet business needs
- Defining features, phases, and solution requirements and providing specifications accordingly
- Experience building stream-processing systems using solutions such as Azure Event Hub, Kafka, etc.
- Strong experience with data modeling and schema design
- Strong knowledge of SQL and NoSQL databases and/or BI/DW
- Excellent interpersonal and teamwork skills
- Experience with leading and mentoring other team members
- Good knowledge of Agile Scrum
- Good communication skills
- Strong analytical, logical, and quantitative ability; takes ownership of a task; values accountability and responsibility; quick learner
Job responsibilities
ETL/ELT processes, data pipelines, Big Data platforms (on-prem/cloud), data ingestion (batch/real-time), data processing, polyglot storage, data governance, cloud (AWS/Azure), IAM, SSO, cluster monitoring, log analytics, source code management (Git/TFS), DevOps, CI/CD automation, stream processing (Kafka, Azure Event Hub), data modeling, schema design, SQL/NoSQL, BI/DW, Agile Scrum, team leadership, communication, analytical skills, ownership, quick learner
What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.
Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.
Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.
Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!
High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.
About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
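
The skills listed above center on Spark-based ETL/ELT with a data-quality gate before loading. The sketch below is a minimal, illustrative PySpark batch job under assumed inputs: the S3 paths, column names, and the validation rule are hypothetical placeholders, not part of the posting.

```python
# Minimal PySpark batch ETL sketch: extract, transform, validate, load.
# Paths and column names are hypothetical examples only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV landed by an upstream batch ingest
raw = spark.read.option("header", True).csv("s3a://raw-zone/orders/")

# Transform: cast types, drop malformed rows, derive a partition column
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .dropna(subset=["order_id", "amount", "order_ts"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Validate: a simple data-quality gate before loading downstream
bad_rows = clean.filter(F.col("amount") <= 0).count()
if bad_rows > 0:
    raise ValueError(f"Data quality check failed: {bad_rows} non-positive amounts")

# Load: write partitioned Parquet for downstream consumers
clean.write.mode("overwrite").partitionBy("order_date").parquet("s3a://curated-zone/orders/")
```

In a real deployment, a job of this shape would typically be scheduled by a workflow manager and its metrics exported to Prometheus/Grafana, as the requirements above describe.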

Posted 1 day ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Description
AI/ML Engineer
Requirements
Full-stack AI Engineer
- Must have: Programming languages – Python, Java/Scala
- Must have: Experience with data processing libraries like Pandas, NumPy, and scikit-learn
- Must have: Proficiency in distributed computing platforms such as Apache Spark (PySpark, Scala), Torch, etc.
- Must have: Proficiency in API development with FastAPI or Spring Boot; understanding of O&M – logging, monitoring, fault management, security, etc.
- Good to have: Hands-on experience with deployment & orchestration tools – Docker, Kubernetes, Helm
- Good to have: Experience with cloud platforms (AWS SageMaker/Bedrock, GCP, or Azure)
- Good to have: Strong programming skills in TensorFlow, PyTorch, or similar ML frameworks (training and deployment)
Job responsibilities
Full-stack AI Engineer, with the same must-have and good-to-have skills listed under Requirements above.
What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.
Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.
Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.
Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!
High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.
About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
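
The role above pairs Python model work with API development in FastAPI or Spring Boot. Below is a hedged, minimal FastAPI sketch for serving a pre-trained scikit-learn model; the model file, feature names, and route are illustrative assumptions, not details from the posting.

```python
# Sketch of a FastAPI prediction endpoint for a pre-trained scikit-learn model.
# The model path, feature names, and route are hypothetical placeholders.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="churn-scorer")
model = joblib.load("models/churn_model.joblib")  # assumed: a fitted sklearn classifier

class Features(BaseModel):
    tenure_months: float
    monthly_spend: float
    support_tickets: int

@app.post("/predict")
def predict(features: Features) -> dict:
    # Column order must match how the model was trained
    x = np.array([[features.tenure_months, features.monthly_spend, features.support_tickets]])
    score = float(model.predict_proba(x)[0, 1])
    return {"churn_probability": score}
```

A service like this would typically be run with `uvicorn main:app`, containerized with Docker, and deployed via Kubernetes/Helm, matching the good-to-have items above.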

Posted 1 day ago

Apply

5.0 years

0 Lacs

Greater Chennai Area

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Apache Spark
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various stakeholders to gather requirements, overseeing the development process, and ensuring that the applications meet the specified needs. You will also engage in problem-solving discussions, providing guidance and support to your team while ensuring that project timelines and quality standards are met. Your role will be pivotal in driving the success of application projects and fostering a collaborative environment within the team.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Mentor junior team members to support their professional growth.
Professional & Technical Skills:
- Must-Have Skills: Proficiency in Apache Spark.
- Strong understanding of distributed computing principles.
- Experience with data processing frameworks and tools.
- Familiarity with cloud platforms and services.
- Ability to optimize application performance and scalability.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.

Posted 1 day ago

Apply

2.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

At Citi we’re not just building technology, we’re building the future of banking. Encompassing a broad range of specialties, roles, and cultures, our teams are creating innovations used across the globe. Citi is constantly growing and progressing through our technology, with a laser focus on evolving the ways of doing things. As one of the world’s most global banks, we’re changing how the world does business.
Shape your career with Citi. We’re currently looking for a high-caliber professional to join our team as 25883567 Officer - ETL Automation Tester - QA - C10 - Hybrid - PUNE, based in Pune/Chennai, India. Being part of our team means that we’ll provide you with the resources to meet your unique needs, empower you to make healthy decisions, and manage your financial well-being to help plan for your future. For instance:
We provide programs and services for your physical and mental well-being, including access to telehealth options, health advocates, confidential counseling and more. Coverage varies by country.
We empower our employees to manage their financial well-being and help them plan for the future.
We provide access to an array of learning and development resources to help broaden and deepen your skills and knowledge as your career progresses.
The Testing Analyst is a developing professional role. Applies specialty area knowledge in monitoring, assessing, analyzing and/or evaluating processes and data. Identifies policy gaps and formulates policies. Interprets data and makes recommendations. Researches and interprets factual information. Identifies inconsistencies in data or results, defines business issues and formulates recommendations on policies, procedures or practices. Integrates established disciplinary knowledge within own specialty area with basic understanding of related industry practices. Good understanding of how the team interacts with others in accomplishing the objectives of the area. Develops working knowledge of industry practices and standards. Limited but direct impact on the business through the quality of the tasks/services provided. Impact of the job holder is restricted to own team.
The candidate is expected to:
- Build data pipelines: extract data from various sources (like databases and data lakes), clean and transform it, and load it into target systems.
- Testing and validation: develop automated tests to ensure the data pipelines are working correctly and the data is accurate. This is like quality control, making sure everything meets the bank’s standards.
- Work with Hive, HDFS, and Oracle data sources to extract, transform, and load large-scale datasets.
- Leverage AWS services such as S3, Lambda, and Airflow for data ingestion, event-driven processing, and orchestration.
- Create reusable frameworks, libraries, and templates to accelerate automation and testing of ETL jobs.
- Participate in code reviews and CI/CD pipelines, and maintain best practices in Spark and cloud-native development.
- Ensure tooling can be run in CI/CD, providing real-time, on-demand test execution that shortens the feedback loop and fully supports hands-free execution of regression, integration, and sanity testing; maintain automated regression suites; report issues, provide solutions, and ensure timely completion.
- Own and drive automation in the Data and Analytics team to achieve 90% automation in the Data/ETL space.
- Design and develop an integrated portal to consolidate utilities and cater to user needs.
- Support initiatives related to automation of Data & Analytics testing requirements for process and product rollout into production.
- Work with the technology team to design and implement appropriate automation scripts/plans for application testing, meeting required KPIs and automation effectiveness.
- Ensure new utilities are documented and transitioned to testers for execution, and support troubleshooting where required.
- Monitor and review code check-ins from peers and help maintain the project repository.
- Ability to work independently as well as collaborate within groups on various assigned projects.
- Ability to work in a fast-paced, dynamic environment and manage multiple priorities effectively.
- Experience and understanding of the Wealth domain, specifically private banking, lending services, and related tech applications.
- Support and contribute to automated test data generation and sufficiency.
The successful candidate would ideally have the following skills and exposure:
- 2-4 years of experience in automation testing across UI.
- Experience in ETL test automation, including testing using SQL queries.
- Hands-on experience with Selenium BDD Cucumber using Java and Python.
- Extensive knowledge of developing and maintaining automation frameworks and AI/ML-related solutions.
- Experience automating BI reports, e.g., Tableau dashboards and views validation; data analytics and BI reports in the financial services industry.
- Hands-on experience in Python for developing data analysis utilities using Pandas, NumPy, etc.
- Exposure to and some experience with AI/ML-related solutions that can help automate faster.
- Experience with mobile testing using Perfecto and API testing (SoapUI, Postman/REST Assured) will be an added advantage.
- Detailed knowledge of data flows in relational database and Big Data systems.
- Strong knowledge of Oracle SQL and HiveQL and understanding of ETL/data testing.
- Experience with CI/CD tools like Jenkins.
- Proficiency in working with the Cloudera Hadoop ecosystem (HDFS, Hive, YARN).
- Hands-on experience with ETL automation and validation frameworks.
- Solid understanding of AWS services like S3, Lambda, EKS, and Airflow.
- Strong problem-solving and debugging skills.
- Excellent communication and collaboration abilities to lead and mentor a large techno-functional team across different geographical locations.
- Strong business acumen and presentation skills.
- Able to work in an Agile environment and deliver results independently.
Education: Bachelor’s/University degree or equivalent experience ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Technology Quality ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
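
The role above automates ETL validation against Hive/Oracle sources with SQL and PySpark. Below is a hedged sketch of two such checks in a pytest-style layout; the table names and the aggregate column are hypothetical, and in practice these would be parameterized and triggered from a CI/CD job such as Jenkins.

```python
# Illustrative PySpark data-validation tests comparing a staging source with a
# curated target. Table names and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_validation").enableHiveSupport().getOrCreate()

def test_row_counts_match():
    src_count = spark.table("staging.customer_txns").count()
    tgt_count = spark.table("curated.customer_txns").count()
    assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"

def test_amount_totals_match():
    # A simple aggregate checksum; real suites would add column-level and null checks
    src_total = spark.table("staging.customer_txns").agg(F.sum("amount")).first()[0]
    tgt_total = spark.table("curated.customer_txns").agg(F.sum("amount")).first()[0]
    assert src_total == tgt_total, "Aggregate checksum mismatch between source and target"
```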

Posted 1 day ago

Apply

10.0 - 15.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

About iSOCRATES
Since 2015, iSOCRATES has advised on, built, and managed mission-critical Marketing, Advertising and Data technologies, platforms, and processes as the Global Leader in MADTECH Resource Planning and Execution™. iSOCRATES delivers globally proven, reliable, and affordable Strategy and Operations Consulting and Managed Services for marketers, agencies, publishers, and the data/tech providers that enable them. iSOCRATES is staffed 24/7/365 with proven specialists who save partners money and time and achieve transparent, accountable performance while delivering extraordinary value. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training.
About MADTECH.AI
MADTECH.AI is your Marketing Decision Intelligence platform. Unify, transform, analyze, and visualize all your data in a single, cost-effective AI-powered hub. Gain speed to value by leaving data wrangling, model building, data visualization, and proactive problem solving to MADTECH.AI. Sharper insights, smarter decisions, faster. MADTECH.AI was spun out of well-established Inc. 5000 consultancy iSOCRATES®, which advises on, builds, manages, and owns mission-critical Marketing, Advertising and Data platforms, technologies and processes as the Global Leader in MADTECH Resource Planning and Execution™, serving marketers, agencies, publishers, and their data/tech suppliers.
Job Description
We are currently seeking an experienced Manager, Data Science, to lead our growing Data Science team. The role involves overseeing the development and implementation of advanced data science techniques to improve media campaigns and enhance our AI-powered solutions. The manager will collaborate with cross-functional teams, providing leadership in analyzing and defining audience, campaign, and media trading data.
Key Responsibilities
Team Leadership & Management: Lead and mentor a team of data scientists, providing guidance in the design, development, and implementation of innovative data solutions. Foster a collaborative and high-performance team culture, ensuring the team is aligned with business goals and technical objectives.
Advanced Analytics & Data Science Expertise: Drive the application of statistical, econometric, and Big Data methods to define business requirements, design analytics solutions, and optimize economic outcomes. Utilize advanced modeling techniques, including propensity modeling, Marketing Mix Modeling (MMM), Multi-Touch Attribution (MTA), and Bayesian statistics, to enhance campaign effectiveness.
Generative AI & NLP Leadership: Lead the implementation and development of Generative AI (GenAI), Large Language Model (LLM), and Natural Language Processing (NLP) techniques for data modeling and predictive analysis. Ensure the integration of AI-driven technologies to improve data science capabilities and results.
Data Architecture & Management: Architect and manage data systems, integrating data from diverse sources and ensuring the optimization of audience, pricing, and contextual data for ad-tech applications. Oversee the management and utilization of DSPs, SSPs, DMPs, and other critical systems in the ad-tech ecosystem.
Cross-Functional Collaboration: Work closely with teams from Product, System Development, Yield, Operations, Finance, Sales, and Business Development to ensure seamless data quality and predictive outcomes across campaigns.
Design and deliver actionable insights and reporting tools for both internal and external business partners.
Predictive Modeling & Optimization: Lead the development of predictive models to optimize media campaigns, focusing on revenue, audience behavior, bid actions, and ad inventory optimization. Analyze campaign performance and provide data-driven recommendations for optimization across multiple media channels, including websites, mobile apps, and social media.
Data Collection & Quality Assurance: Oversee the collection, management, and quality assurance of data, ensuring high standards and efficient systems for in-depth analysis and reporting. Lead the development of tools and methodologies for complex data analysis, model development, and visualization to support business objectives.
Qualifications & Skills
- Master’s or Ph.D. in Statistics, Engineering, Science, or Business, with a strong foundation in mathematics and statistics.
- 10 to 15 years of experience in data science, predictive analytics, and digital analytics, with at least 7 years of hands-on experience in modeling, analysis, and optimization within the media, advertising, or tech industry.
- At least 6 years of hands-on experience with Generative AI, Large Language Models, and Natural Language Processing techniques.
- Strong proficiency in data collection, machine learning, and deep learning techniques using tools such as Python, R, Pandas, scikit-learn, Hadoop, Spark, MySQL, SQL, and AWS S3.
- Experience working with DSPs, SSPs, DMPs, and other programmatic systems in digital advertising.
- Expertise in statistical modeling, customer segmentation, persona building, and predictive analytics.
- Advanced understanding of programmatic media optimization, audience behavior, and pricing strategies.
- Strong problem-solving skills with the ability to adapt to evolving business needs and deliver solutions proactively.
- Experience in designing analytics dashboards, visualization tools, and reporting systems.
- Excellent communication and presentation skills, with the ability to explain complex technical concepts to non-technical stakeholders.
- Ability to manage multiple tasks and projects effectively, both independently and in collaboration with remote teams.
- An interest in working in a fast-paced, dynamic environment, focused on revenue and analytics in the digital media space.
- Relocation to Mysuru or Bengaluru required.
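
The role above names propensity modeling in the Python/scikit-learn stack as a core technique. The following is a deliberately minimal, hedged sketch of such a model; the dataset, feature names, and file path are illustrative assumptions only, and a production campaign model would use far richer audience and bid features.

```python
# Minimal propensity-to-convert sketch with pandas and scikit-learn.
# The CSV path, feature columns, and target label are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("campaign_events.csv")  # assumed export of campaign-level data
features = ["impressions", "clicks", "site_visits", "recency_days"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["converted"], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Hold-out AUC: {auc:.3f}")
```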

Posted 1 day ago

Apply

100.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About H.E. Services: At the H.E. Services vibrant tech center in Hyderabad, you will have the opportunity to contribute to technology innovation for Holman Automotive, a leading American fleet management and automotive services company. Our goal is to continue investing in people, processes, and facilities to ensure expansion in a way that allows us to support our customers and develop new tech solutions. Holman has come a long way during its first 100 years in business. The automotive markets Holman serves include fleet management and leasing; vehicle fabrication and upfitting; component manufacturing and productivity solutions; powertrain distribution and logistics services; commercial and personal insurance and risk management; and retail automotive sales as one of the largest privately owned dealership groups in the United States. Join us and be part of a team that's transforming the way Holman operates, creating a more efficient, data-driven, and customer-centric future.
Roles & Responsibilities:
- Design, develop, and maintain data pipelines using Databricks, Spark, and other Azure cloud technologies.
- Optimize data pipelines for performance, scalability, and reliability, ensuring high speed and availability of data warehouse performance.
- Develop and maintain ETL processes using Databricks and Azure Data Factory for real-time or trigger-based data replication.
- Ensure data quality and integrity throughout the data lifecycle, implementing new data validation methods and analysis tools.
- Collaborate with data scientists, analysts, and stakeholders to understand and meet their data needs.
- Troubleshoot and resolve data-related issues, providing root cause analysis and recommendations.
- Manage a centralized data warehouse in Azure SQL to create a single source of truth for organizational data, ensuring compliance with data governance and security policies.
- Document data pipeline specifications, requirements, and enhancements, effectively communicating with the team and management.
- Leverage AI/ML capabilities to create innovative data science products.
- Champion and maintain testing suites, code reviews, and CI/CD processes.
Must Have:
- Strong knowledge of Databricks architecture and tools.
- Proficient in SQL, Python, and PySpark for querying databases and data processing.
- Experience with Azure Data Lake Storage (ADLS), Blob Storage, and Azure SQL.
- Deep understanding of distributed computing and Spark for data processing.
- Experience with data integration and ETL tools, including Azure Data Factory.
- Advanced-level knowledge and practice of: data warehouse and data lake concepts and architectures; optimizing performance of databases and servers; managing infrastructure for storage and compute resources; writing unit tests and scripts; Git, GitHub, and CI/CD practices.
Good to Have:
- Experience with big data technologies such as Kafka, Hadoop, and Hive.
- Familiarity with Azure Databricks Medallion Architecture with DLT and Iceberg.
- Experience with semantic layers and reporting tools like Power BI.
Relevant Work Experience:
- 5+ years of experience as a Data Engineer, ETL Developer, or similar role, with a focus on Databricks and Spark.
- Experience working on internal, business-facing teams.
- Familiarity with agile development environments.
Education and Training: Bachelor's degree in computer science, engineering, or a related field, or equivalent work experience.
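
A pipeline of the kind described above typically reads raw files from ADLS, applies validation, and writes a Delta table in Databricks. The sketch below is illustrative only: the storage account, container paths, and column names are assumptions, and it presumes a Databricks (or Delta-enabled) runtime.

```python
# Hedged Databricks/PySpark sketch: raw ADLS files -> validated Delta table.
# Storage paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fleet_ingest").getOrCreate()

raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/telematics/")

curated = (
    raw.dropDuplicates(["vehicle_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("odometer_km") >= 0)  # simple integrity rule before loading
)

(curated.write
    .format("delta")
    .mode("append")
    .partitionBy("event_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/telematics_delta/"))
```

In practice, a job like this would be orchestrated by Azure Data Factory or Databricks Workflows and covered by the unit tests and CI/CD checks called out in the must-have list.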

Posted 1 day ago

Apply

100.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

A legacy of excellence, driving innovation and personalized service to create exceptional customer experiences.
About H.E. Services: At the H.E. Services vibrant tech center in Hyderabad, you’ll have the opportunity to contribute to technology innovation for Holman Automotive, a leading American fleet management and automotive services company. Our goal is to continue investing in people, processes, and facilities to ensure expansion in a way that allows us to support our customers and develop new tech solutions. Holman has come a long way during its first 100 years in business. The automotive markets Holman serves include fleet management and leasing; vehicle fabrication and upfitting; component manufacturing and productivity solutions; powertrain distribution and logistics services; commercial and personal insurance and risk management; and retail automotive sales as one of the largest privately owned dealership groups in the United States. Join us and be part of a team that's transforming the way Holman operates, creating a more efficient, data-driven, and customer-centric future.
The Business Intelligence Developer II will be responsible for designing, developing, and maintaining advanced data solutions. This role involves creating pipelines in Databricks for Silver (curated) and Gold (aggregated, high-value) layers of data, developing insightful dashboards in Power BI, and applying Machine Learning (ML) and Artificial Intelligence (AI) techniques to solve complex business problems.
Roles & Responsibilities:
- Develop and maintain data pipelines in Databricks for Silver and Gold layers, ensuring data quality and reliability.
- Optimize data workflows to handle large volumes of structured and unstructured data efficiently.
- Design and optimize Power BI semantic models, including creating star schemas, managing table relationships, and defining DAX measures to support robust reporting solutions.
- Create, enhance, and maintain interactive dashboards and reports in Power BI to provide actionable insights to stakeholders.
- Collaborate with business units to gather requirements and ensure dashboards meet user needs.
- Use Databricks and other platforms to build and operationalize ML/AI models to enhance decision-making.
- Work closely with data engineers, analysts, and business stakeholders to deliver scalable and innovative data solutions.
- Participate in code reviews, ensure best practices, and contribute to a culture of continuous improvement.
Relevant Work Experience:
- 3-5 years of experience in business intelligence, data engineering, or a related role.
- Proficiency in Databricks (Spark, PySpark) for data processing and transformation.
- Strong expertise in Power BI for semantic model management, dashboarding, and visualization.
- Experience building and deploying ML/AI models in Databricks or similar platforms.
Must-Have Technical Skills:
- Proficiency in SQL and Python.
- Solid understanding of ETL/ELT pipelines and data warehousing concepts.
- Familiarity with cloud platforms (e.g., Azure, AWS) and tools like Delta Lake.
- Git, GitHub, and CI/CD practices.
- Excellent problem-solving and analytical skills.
- Strong communication skills, with the ability to translate complex technical concepts into business-friendly language.
- Proven ability to work both independently and collaboratively in a fast-paced environment.
Preferred Qualifications:
- Certifications in Power BI, Databricks, or cloud platforms.
- Experience with advanced analytics tools (e.g., TensorFlow, scikit-learn, AutoML).
- Exposure to Agile methodologies and DevOps practices.
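
The Silver-to-Gold responsibility above amounts to aggregating a curated table into a reporting-grade table that Power BI can consume. The sketch below is a hedged PySpark example of that step; the table names, grouping keys, and measures are assumptions, and it presumes the `silver` and `gold` schemas already exist in the workspace catalog.

```python
# Illustrative Silver-to-Gold aggregation in Databricks (PySpark).
# Table names and measures are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gold_sales_daily").getOrCreate()

silver = spark.table("silver.sales_orders")  # curated, deduplicated layer

gold = (
    silver.groupBy("order_date", "region")
          .agg(
              F.sum("net_amount").alias("total_revenue"),
              F.countDistinct("customer_id").alias("unique_customers"),
          )
)

# Gold-layer table that a Power BI semantic model can import or query directly
gold.write.format("delta").mode("overwrite").saveAsTable("gold.sales_daily_by_region")
```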

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Title: Data Engineer
Location: Hyderabad, India (Onsite), Full-time
Job Description: We are seeking an experienced Data Engineer with 5-8 years of professional experience to design, build, and optimize robust and scalable data pipelines for our SmartFM platform. The ideal candidate will be instrumental in ingesting, transforming, and managing vast amounts of operational data from various building devices, ensuring high data quality and availability for analytics and AI/ML applications. This role is critical in enabling our platform to generate actionable insights, alerts, and recommendations for optimizing facility operations.
ROLES AND RESPONSIBILITIES
• Design, develop, and maintain scalable and efficient data ingestion pipelines from diverse sources (e.g., IoT devices, sensors, existing systems) using technologies like IBM StreamSets, Azure Data Factory, Apache Spark, Talend, Apache Flink, and Kafka.
• Implement robust data transformation and processing logic to clean, enrich, and structure raw data into formats suitable for analysis and machine learning models.
• Manage and optimize data storage solutions, primarily within MongoDB, ensuring efficient schema design, data indexing, and query performance for large datasets.
• Collaborate closely with Data Scientists to understand their data needs, provide high-quality, reliable datasets, and assist in deploying data-driven solutions.
• Ensure data quality, consistency, and integrity across all data pipelines and storage systems, implementing monitoring and alerting mechanisms for data anomalies.
• Work with cross-functional teams (Software Engineers, Data Scientists, Product Managers) to integrate data solutions with the React frontend and Node.js backend applications.
• Contribute to the continuous improvement of data architecture, tooling, and best practices, advocating for scalable and maintainable data solutions.
• Troubleshoot and resolve complex data-related issues, optimizing pipeline performance and ensuring data availability.
• Stay updated with emerging data engineering technologies and trends, evaluating and recommending new tools and approaches to enhance our data capabilities.
REQUIRED TECHNICAL SKILLS AND EXPERIENCE
• 5-8 years of professional experience in Data Engineering or a related field.
• Proven hands-on experience with data pipeline tools such as IBM StreamSets, Azure Data Factory, Apache Spark, Talend, Apache Flink, and Apache Kafka.
• Strong expertise in database management, particularly with MongoDB, including schema design, data ingestion pipelines, and data aggregation.
• Proficiency in at least one programming language commonly used in data engineering, such as Python or Java/Scala.
• Experience with big data technologies and distributed processing frameworks (e.g., Apache Spark, Hadoop) is highly desirable.
• Familiarity with cloud platforms (Azure, AWS, or GCP) and their data services.
• Solid understanding of data warehousing concepts, ETL/ELT processes, and data modeling.
• Experience with DevOps practices for data pipelines (CI/CD, monitoring, logging).
• Knowledge of Node.js and React environments to facilitate seamless integration with existing applications.
ADDITIONAL QUALIFICATIONS
• Demonstrated expertise in written and verbal communication, adept at simplifying complex technical concepts for both technical and non-technical audiences.
• Strong problem-solving and analytical skills with a meticulous approach to data quality.
• Experienced in collaborating and communicating seamlessly with diverse technology roles, including development, support, and product management.
• Highly motivated to acquire new skills, explore emerging technologies, and stay updated on the latest trends in data engineering and business needs.
• Experience in the facility management domain or with IoT data is a plus.
EDUCATION REQUIREMENTS / EXPERIENCE
• Bachelor’s (BE/BTech) or Master’s degree (MS/MTech) in Computer Science, Information Systems, Mathematics, Statistics, or a related quantitative field.
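
The ingestion path described above (device events arriving via Kafka, transformed for downstream analytics) is commonly implemented with Spark Structured Streaming. The sketch below is a hedged example under assumed inputs: the topic name, event schema, broker address, and sink paths are hypothetical, it requires the Spark Kafka connector package, and a production pipeline might instead write to MongoDB via its Spark connector.

```python
# Minimal Spark Structured Streaming sketch: Kafka IoT events -> parsed -> landed.
# Broker, topic, schema, and paths are hypothetical; requires the spark-sql-kafka package.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("smartfm_ingest").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
    StructField("event_ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "building-sensors")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream
          .format("parquet")
          .option("path", "/data/landing/sensors/")
          .option("checkpointLocation", "/data/checkpoints/sensors/")
          .start()
)
query.awaitTermination()
```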

Posted 1 day ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Customer is seeking skilled and motivated professionals to join our project team supporting Customer across multiple data and AI product domains. The candidates will be part of a dynamic, cloud-native data engineering and analytics environment, working alongside domain leads to support global initiatives.
Key Responsibilities:
- Work on the development and enhancement of data products using modern cloud-based technologies.
- Collaborate with Customer domain leads and stakeholders to translate requirements into scalable data solutions.
- Build and maintain data pipelines using AWS and Azure cloud platforms.
- Support integration with Snowflake-based data warehouses.
- Ensure solutions are well documented, scalable, and aligned with enterprise architecture principles.
- Participate in discussions related to data architecture, best practices, and governance.
Technology Stack:
- Cloud Platforms: AWS (primary), Azure (secondary)
- Data Platforms: Snowflake
- Languages & Tools: Python, SQL, Spark (preferred), Terraform (optional)
- Other Tools: Git, CI/CD tools, JIRA, Confluence
Required Skills & Experience:
- 3–8 years of experience in data engineering, analytics engineering, or cloud-native solution delivery.
- Strong experience in building data pipelines in AWS and/or Azure environments.
- Hands-on experience with Snowflake is essential.
- Ability to work with distributed teams and collaborate directly with stakeholders.
- Strong problem-solving, communication, and documentation skills.
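
A common pattern for the AWS-to-Snowflake integration named above is to land files in S3 and then issue a COPY INTO from a Snowflake external stage. The sketch below is a hedged example using the Snowflake Python connector; the account, stage, and table names are placeholders, and credentials would normally come from a secrets manager rather than literals.

```python
# Hedged sketch: load S3-staged Parquet files into a Snowflake table via COPY INTO.
# Connection parameters, stage, and table names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",            # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

cur = conn.cursor()
try:
    # @S3_RAW_STAGE is assumed to be an external stage pointing at the landing bucket
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @S3_RAW_STAGE/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
finally:
    cur.close()
    conn.close()
```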

Posted 1 day ago

Apply

14.0 years

0 Lacs

India

Remote

Who We Are
At Twilio, we’re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work, and strong culture of connection and global inclusion, means that no matter your location, you’re part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we’re acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.
See yourself at Twilio
Join the team as Twilio’s next Senior Engineering Manager on Twilio’s Traffic Intelligence team.
About The Job
This position is needed to manage the team of machine learning engineers of the Growth & User Intelligence team and closely partner with Product & Engineering teams to execute the roadmap for Twilio’s AI/ML products and services. You will understand customers' needs, build ML and Data Science products that work at a global scale, and own end-to-end execution of large-scale ML solutions. As a senior manager, you will closely partner with technology and product leaders in the organization to enable the engineers to turn ideas into reality.
Responsibilities
In this role, you’ll:
- Build and maintain scalable machine learning solutions for the Traffic Intelligence vertical.
- Be a champion for your team, setting individuals up for success and putting others’ growth first.
- Understand the architecture and processes required to build and operate always-available, complex, and scalable distributed systems in cloud environments.
- Advocate agile processes, continuous integration, and test automation.
- Be a strategic problem solver and thrive operating in broad scope, from conception through continuous operation of 24x7 services.
- Exhibit strong communication skills, in person or on paper. You can explain technical concepts to product managers, architects, other engineers, and support.
Qualifications
Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!
Required
- 14+ years of experience, including a proven track record of at least 5 years leading and managing software teams.
- Experience managing multiple workstreams within the team.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Technical experience with: applied ML models, with proficiency in Python; modern data storage, messaging, and processing tools (Kafka, Apache Spark, Hadoop, Presto, DynamoDB, etc.); cloud technologies like AWS, GCP, etc.; ML frameworks like PyTorch, TensorFlow, or Keras; SaaS telemetry and observability tools such as Datadog, Grafana, etc.
- Excellent problem-solving, critical-thinking, and communication skills.
- Broad knowledge of development environments and tools used to implement and build code for deployment.
- Strong familiarity with agile processes, continuous integration, and a strong belief in automation over toil.
- As a pragmatist, you are able to distill complex and ambiguous situations into actionable plans for your team.
- Owned and operated services end-to-end, from requirements gathering and design, to debugging and testing, to release management and operational monitoring.
Desired
- Experience with Large Language Models.
- Experience designing and implementing highly scalable and performant ML models.
Location
This role will be remote and based in India (Karnataka, Tamil Nadu, Telangana, Maharashtra & New Delhi).
Travel
We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings.
What We Offer
Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.
Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That's why we seek out colleagues who embody our values — something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn't what you're looking for, please consider other open positions.
Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.

Posted 1 day ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.
Job Description
As the senior data scientist, your role involves spearheading the development and execution of data-driven solutions for clients. Collaborating closely with clients, you will adeptly grasp their business needs, translating them into an AI/ML framework. Your expertise will be pivotal in designing models and selecting suitable techniques to address the client's specific challenges. Responsible for the entire data science project lifecycle, your duties extend from comprehensive data collection to meticulous model development, deployment, maintenance, and optimization. Your focus will particularly centre on crafting machine learning and deep learning models customized for retail and customer analytics, incorporating champion-challenger models to enhance performance. Effective communication with senior stakeholders is imperative in this role, and your proficiency in Python coding will be crucial for seamless end-to-end model development. As the lead data scientist, you will play a key role in driving innovative solutions that align with client objectives and industry best practices.
You should possess good communication and project management skills and be able to communicate effectively with a wide range of audiences, both technical and business. You will be responsible for creating presentations, reports, etc. to present the analysis findings to the end clients/stakeholders, and should possess the ability to confidently socialize business recommendations and enable the customer organization to implement such recommendations.
You must be familiar with, and able to implement, a range of models including regression, classification, clustering, decision tree, random forest, support vector machine, naïve Bayes, GBM, XGBoost, multiple linear regression, logistic regression, and ARIMA/ARIMAX. You should be competent in Python (Pandas, NumPy, scikit-learn, etc.), possess a high level of analytical skill, and have experience in the creation and/or evaluation of predictive models.
Qualifications: Python for Data Science (mandatory); good proficiency in end-to-end coding, including deployment experience; experience processing large data; minimum 3 years of experience in the Retail domain. Preferred skills include proficiency in SQL, Spark, Excel, Azure, AWS, GCP, Power BI, and Flask. Preferred experience in areas such as time series analysis, market mix modelling, attribution modelling, churn modelling, market basket analysis, etc. Possess a strong understanding of mathematics with logical thinking abilities. Excellent communication skills are a must.
Qualifications
BTech/Masters in Statistics/Mathematics/Economics/Econometrics from Tier 1-2 institutions, or BE/B-Tech, MCA or MBA.
Relevant Experience: 8+ years of hands-on experience in delivering Data Science/Analytics projects.
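
The champion-challenger approach named above simply means evaluating a candidate model against the incumbent on the same hold-out data before promoting it. The sketch below is a hedged, minimal example in the scikit-learn stack the posting cites; the dataset, feature columns, and metric choice are illustrative assumptions only.

```python
# Minimal champion-challenger comparison with pandas and scikit-learn.
# The CSV path, features, and target are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("customer_transactions.csv")  # assumed retail dataset
X = df[["recency", "frequency", "monetary", "tenure"]]
y = df["repeat_purchase"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=7)

champion = LogisticRegression(max_iter=1000).fit(X_train, y_train)              # incumbent model
challenger = GradientBoostingClassifier(random_state=7).fit(X_train, y_train)   # candidate replacement

for name, model in [("champion", champion), ("challenger", challenger)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: hold-out AUC = {auc:.3f}")
```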

Posted 1 day ago

Apply

1.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Job Description
About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights.
Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service work scope encompasses not just good integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtimes in the underlying infrastructure (e.g., for things like patching the Linux kernels to take care of a security vulnerability). Developing systems for monitoring and getting telemetry into the service’s runtime characteristics, and being able to take actions on the telemetry data, is part of the charter.
We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.
Minimum Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
- Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
- US passport holders (required by the position to access US Gov regions).
- Expertise in coding in Java and Python, with an emphasis on tuning/optimization.
- Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
- Experience with open-source software in the Big Data ecosystem.
- Experience at an organization with an operational/DevOps culture.
- Solid understanding of networking, storage, and security components related to cloud infrastructure.
- Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.
Preferred Qualifications:
- Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies.
- Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
- In-depth understanding of Java and JVM mechanics.
- Good problem-solving skills and the ability to work in a fast-paced, agile environment.
Responsibilities
Key Responsibilities:
- Participate in development and maintenance of a scalable and secure Hadoop-based data lake service.
- Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings.
- Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
- Become an active member of the Apache open source community when working on open source components.
- Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.
Qualifications
Career Level - IC2
About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 day ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Applications Development Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.
Responsibilities:
- Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements
- Identify and analyze issues, make recommendations, and implement solutions
- Utilize knowledge of business processes, system processes, and industry standards to solve complex issues
- Analyze information and make evaluative judgements to recommend solutions and improvements
- Conduct testing and debugging, utilize script tools, and write basic code for design specifications
- Assess applicability of similar experiences and evaluate options under circumstances not covered by procedures
- Develop working knowledge of Citi’s information systems, procedures, standards, client server application development, network operations, database administration, systems administration, data center operations, and PC-based applications
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Qualifications:
- 4+ years of relevant experience
- Experience in programming/debugging used in business applications
- Working knowledge of industry practice and standards
- Comprehensive knowledge of specific business area for application development
- Working knowledge of program languages
- Consistently demonstrates clear and concise written and verbal communication
Education:
- Bachelor’s degree/University degree or equivalent experience
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 1 day ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.
Job Description
- 3 to 5 years of experience in data engineering with Azure cloud services.
- Strong expertise in Azure Data Factory (ADF) for pipeline orchestration.
- Hands-on experience with Azure Event Hub for real-time data streaming.
- Proficient in Python (PySpark, Pandas, scripting) and SQL for data processing.
- Extensive experience with Azure Databricks (Spark, Delta Lake) and dbt.
- Experience with CI/CD, Git, and Infrastructure as Code (IaC).
- Familiarity with Snowflake.
- Exposure to Power BI.
- Knowledge of SQL, NoSQL, and data warehousing concepts.
- Strong problem-solving and debugging skills.
Qualifications
Graduate/Post Graduate

Posted 1 day ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Data Engineer 📍 Location: Gurugram, India 🕒 Experience: 6–8 years 🧑‍💻 Employment Type: Full-time Key Responsibilities Design, build, and optimize scalable data pipelines to support advanced Media Mix Modeling (MMM) and Multi-Touch Attribution (MTA) models. Collaborate with Data Scientists to prepare data for training, validation, and deployment of machine learning models and statistical algorithms. Ingest and transform large volumes of structured and unstructured data from multiple sources, ensuring data quality and integrity. Partner with cross-functional teams (AdSales, Analytics, and Product) to deliver reliable data solutions that drive marketing effectiveness and campaign performance. Automate data workflows and build reusable components for model deployment, data validation, and reporting. Support data scientists with efficient access to cleaned and transformed data, optimizing for both performance and usability. Contribute to the design of a unified data architecture supporting AdTech, OTT, and digital media ecosystems. Stay updated with the latest trends in data engineering, AI-driven analytics, and cloud-native tools to improve data delivery and model deployment processes. Required Skills & Experience 6+ years of hands-on experience in Data Engineering, data analytics, or related roles. At least 3 years working in AdTech, AdSales, or digital media analytics environments. Experience supporting MMM and MTA modeling efforts with high-quality, production-ready data pipelines. Proficiency in Python, SQL, and data transformation tools; experience with R is a plus. Strong knowledge of data modeling, ETL pipelines, and handling large-scale datasets using distributed systems (e.g., Spark, AWS, or GCP). Familiarity with cloud platforms (AWS, Azure, or GCP) and data services (S3, Redshift, BigQuery, Snowflake, etc.). Experience with BI tools such as Tableau, Power BI, or Looker for report automation and insight generation. Solid understanding of statistical techniques, A/B testing, and model evaluation metrics. Excellent communication and collaboration skills to work with both technical and non-technical stakeholders. Preferred Qualifications Experience in media or OTT data environments. Exposure to machine learning model deployment, model monitoring, and MLOps practices. Knowledge of Kafka, Airflow, or dbt for orchestration and transformation.
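As one illustration of the MTA-style data preparation mentioned above, here is a hedged PySpark sketch that assigns last-touch attribution by joining touchpoints to conversions with a window function. Table names, columns, and the attribution rule are assumptions for illustration only; a production MTA pipeline would use the team's actual schema and attribution logic.

```python
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("last_touch_attribution").getOrCreate()

touches = spark.table("marketing.touchpoints")       # assumed: user_id, channel, touch_ts
conversions = spark.table("marketing.conversions")   # assumed: user_id, conversion_ts, revenue

# Keep only touchpoints that happened before the conversion they might explain.
joined = (
    touches.join(conversions, "user_id")
           .filter(F.col("touch_ts") <= F.col("conversion_ts"))
)

# Rank touches per conversion so the most recent one gets the credit (last-touch rule).
w = Window.partitionBy("user_id", "conversion_ts").orderBy(F.col("touch_ts").desc())
last_touch = (
    joined.withColumn("rank", F.row_number().over(w))
          .filter("rank = 1")
)

# Revenue credited to each channel under this simple attribution rule.
last_touch.groupBy("channel").agg(F.sum("revenue").alias("attributed_revenue")).show()
```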

Posted 1 day ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service work scope encompasses not just good integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtime in the underlying infrastructure (e.g., patching the Linux kernel to address a security vulnerability). Developing systems for monitoring, gathering telemetry on the service’s runtime characteristics, and acting on that telemetry data is also part of the charter. We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact. Minimum Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or related technical field. Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies. Must be a US passport holder; this is required by the position to access US Gov regions. Expertise in coding in Java and Python, with an emphasis on tuning/optimization. Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments. Experience with open-source software in the Big Data ecosystem. Experience at an organization with an operational/dev-ops culture. Solid understanding of networking, storage, and security components related to cloud infrastructure. Solid foundation in data structures, algorithms, and software design with strong analytical and debugging skills. Preferred Qualifications: Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink and other big data technologies. Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP. In-depth understanding of Java and JVM mechanics. Good problem-solving skills and the ability to work in a fast-paced, agile environment. Responsibilities Key Responsibilities: Participate in development and maintenance of a scalable and secure Hadoop-based data lake service. 
Code, integrate, and operationalize open and closed source data ecosystem components for Oracle cloud service offerings Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements. Becoming an active member of the Apache open source community when working on open source components Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud. Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
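The posting calls out building telemetry and monitoring for the service's runtime characteristics. Below is a small, hedged Python sketch that polls a Hadoop NameNode's JMX endpoint and reports a few capacity metrics; the host, port, bean name, and metric keys are assumptions about a typical Hadoop 3.x deployment rather than details of the Oracle service.

```python
# Hypothetical telemetry poller for a Hadoop NameNode (endpoint and bean names are assumptions).
import time
import requests

NAMENODE_JMX = "http://namenode.example.internal:9870/jmx"    # assumed Hadoop 3.x web UI port
QUERY = {"qry": "Hadoop:service=NameNode,name=FSNamesystem"}  # assumed JMX bean

def poll_once() -> dict:
    """Fetch the FSNamesystem bean and return its attributes as a dict."""
    resp = requests.get(NAMENODE_JMX, params=QUERY, timeout=5)
    resp.raise_for_status()
    beans = resp.json().get("beans", [])
    return beans[0] if beans else {}

if __name__ == "__main__":
    while True:
        metrics = poll_once()
        # Only report keys that are actually present, since bean contents vary by version.
        for key in ("CapacityTotal", "CapacityUsed", "MissingBlocks"):
            if key in metrics:
                print(f"{key}={metrics[key]}")
        time.sleep(60)  # naive fixed-interval polling; a real agent would push to a metrics store
```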

Posted 1 day ago

Apply

12.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Position: Senior Technical Leader - Backend Location: Mumbai, India [Thane] Team: Engineering Experience: 12+ Years 🚀 Are you a seasoned technical leader looking to drive engineering excellence at scale? At Netcore Cloud, we’re seeking a Senior Technical Leader who brings deep technical expertise, a track record of designing scalable systems, and a passion for innovation. This is a high-impact role where you will lead the architecture and design of mission-critical systems that power user engagement for thousands of global brands. 🛠️ What You’ll Do Architect highly available, scalable, and fault-tolerant backend systems handling billions of events and terabytes of data. Design real-time campaign processing engines capable of delivering 10 million+ messages per minute. Lead development of complex analytics frameworks including cohort analysis, funnel tracking, and user behavior modeling. Drive architecture decisions on distributed systems, microservices, and cloud-native platforms. Define technical roadmaps and work closely with engineering teams to ensure alignment and execution. Collaborate across product, engineering, DevOps, and data teams to deliver business-critical functionality. Mentor engineers and contribute to engineering excellence through code and design reviews, best practice evangelism, and training. Evaluate and implement tools and frameworks for continuous improvement in scalability, performance, and observability. 🧠 What You Bring 12+ years of hands-on experience in software engineering with a strong foundation in Java or Golang and related backend technologies. Proven experience designing distributed systems, microservices, and event-driven architectures. Deep knowledge of cloud platforms (AWS/GCP), CI/CD, containerization (Docker, Kubernetes) and infrastructure as code. Strong understanding of data processing at scale using Kafka, NoSQL DBs (MongoDB/Cassandra), Redis, and RDBMS (MySQL/PostgreSQL). Exposure to stream processing engines (e.g., Apache Storm/Flink/Spark) is a plus. Familiarity with AI tools and their integration into scalable systems is a plus. Experience with application security, fault tolerance, caching, multithreading, and performance tuning. A mindset of quality, ownership, and delivering business value. 💡 Why Netcore? Being first is in our nature. Netcore Cloud is the first and leading AI/ML-powered customer engagement and experience platform (CEE) that helps B2C brands increase engagement, conversions, revenue, and retention. Our cutting-edge SaaS products enable personalized engagement across the entire customer journey and build amazing digital experiences for businesses of all sizes. Netcore’s Engineering team focuses on adoption, scalability, complex challenges, and fastest processing. We use versatile tech stacks like streaming technologies and queue management systems such as Kafka, Storm, RabbitMQ, Celery, and RedisQ. Netcore strikes a perfect balance between experience and agility. We currently work with 5000+ enterprise brands across 18 countries, serving over 70% of India’s Unicorns, positioning us among the top-rated customer engagement & experience platforms. Headquartered in Mumbai, we have a global footprint across 10 countries, including the United States and Germany. Being certified as a Great Place to Work for three consecutive years reinforces Netcore’s principle of being a people-centric company — where you're not just an employee but part of a family. 🌟 What’s in it for You?
Immense growth and continuous learning. Solve complex engineering problems at scale. Work with top industry talent and global brands. An open, entrepreneurial culture that values innovation. 📩 Ready to shape the future of digital customer engagement? Apply now— your next big opportunity starts here. A career at Netcore is more than just a job — it’s an opportunity to shape the future. Learn more at netcorecloud.com .
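A rough Python sketch of one small path inside the kind of event-driven campaign engine this role describes: consuming engagement events from Kafka and keeping per-campaign counters in Redis. Topic names, message shape, and connection details are assumptions; production code at this scale would add batching, error handling, and idempotency.

```python
import json

import redis
from kafka import KafkaConsumer  # kafka-python client

# Assumed topic and brokers, for illustration only.
consumer = KafkaConsumer(
    "campaign-events",
    bootstrap_servers=["kafka-1:9092"],
    group_id="campaign-counters",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    enable_auto_commit=True,
)
cache = redis.Redis(host="redis.example.internal", port=6379)

for message in consumer:
    event = message.value                      # assumed shape: {"campaign_id": ..., "type": ...}
    campaign_id = event.get("campaign_id")
    if campaign_id is None:
        continue                               # skip malformed events
    # Per-campaign, per-event-type counter; HINCRBY is atomic on the Redis side.
    cache.hincrby(f"campaign:{campaign_id}", event.get("type", "unknown"), 1)
```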

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Join us as a Principal Engineer This is a challenging role that will see you design and engineer software with the customer or user experience as the primary objective You’ll actively contribute to our architecture, design and engineering centre of excellence, collaborating to improve the bank’s overall software engineering capability You’ll gain valuable stakeholder exposure as you build and leverage relationships, as well as the opportunity to hone your technical talents We're offering this role at vice president level What you'll do As a Principal Engineer, you’ll be creating great customer outcomes via engineering and innovative solutions to existing and new challenges, and technology designs which are innovative, customer centric, high performance, secure and robust. You’ll be working with software engineers in the production and prototyping of innovative ideas, engaging with domain and enterprise architects to validate and leverage these in wider contexts, by incorporating the relevant architectures. You'll be leading functional engineering teams, managing end-to-end product implementations, and driving demos and stakeholder engagement across platforms. We’ll also look to you to design and develop software with a focus on the automation of build, test and deployment activities, while developing the discipline of software engineering across the business. You’ll Also Be Defining, creating and providing oversight and governance of engineering and design solutions with a focus on end-to-end automation, simplification, resilience, security, performance, scalability and reusability Working within a platform or feature team along with software engineers to design and engineer complex software, scripts and tools to enable the delivery of bank platforms, applications and services, acting as a point of contact for solution design considerations Defining and developing architecture models and roadmaps of application and software components to meet business and technical requirements, driving common usability across products and domains Designing, producing, testing and implementing the working code, along with applying Agile methods to the development of software with the use of DevOps techniques The skills you'll need You’ll come with significant experience in software engineering, software or database design and architecture, as well as experience of developing software within a DevOps and Agile framework. Along with an expert understanding of the latest market trends, technologies and tools, you’ll bring significant and demonstrable experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance. You’ll Also Need Strong experience in gathering business requirements, translating them into technical user stories, and leading functional solution design—especially within the banking domain and CRM (MS Dynamics) Hands-on with PowerApps, D365 (including Custom Pages), and frontend configuration; proficient in Power BI (SQL, DAX, Power Query, Data Modelling, RLS, Azure, Lakehouse, Python, Spark SQL) A background in designing or implementing APIs The ability to rapidly and effectively understand and translate product and business requirements into technical solutions

Posted 1 day ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Senior Technical Trainer – Cloud, Data & AI/ML Location: Pune Experience Required: 10+ Years About the Role: We’re looking for an experienced and passionate technical trainer who can help elevate our teams’ capabilities in cloud technologies, data engineering, and AI/ML. This role is ideal for someone who enjoys blending hands-on tech skills with a strong ability to simplify, teach, and mentor. As we grow and scale at Meta For Data, building internal expertise is a key part of our strategy—and you’ll be central to that effort. What You’ll Be Doing: Lead and deliver in-depth training sessions (both live and virtual) across areas like cloud architecture, data engineering, and machine learning. Build structured training content including presentations, labs, exercises, and assessments. Develop learning journeys tailored to different experience levels and roles—ranging from new hires to experienced engineers. Continuously update training content to reflect changes in tools, platforms, and best practices. Collaborate with engineering, HR, and L&D teams to roll out training schedules, track attendance, and gather feedback. Support ongoing learning post-training through mentoring, labs, and knowledge checks. What We’re Looking For: Around 10 years of experience in a mix of software development, cloud/data/ML engineering, and technical training. Deep familiarity with at least one cloud platform (AWS, Azure, or GCP); AWS or Azure is preferred. Strong grasp of data platforms, ETL pipelines, Big Data tools (like Spark or Hadoop), and warehouse systems. Solid understanding of the AI/ML lifecycle—model building, tuning, deployment—with hands-on experience in Python-based libraries (e.g., TensorFlow, PyTorch, Scikit-learn). Confident communicator who’s comfortable speaking to groups and explaining complex concepts simply. Bonus if you hold any relevant certifications like AWS Solutions Architect, Google Data Engineer, or Microsoft AI Engineer. Nice to Have: Experience creating online training modules or managing LMS platforms. Prior experience training diverse audiences: tech teams, analysts, product managers, etc. Familiarity with MLOps and modern deployment practices for AI models. Why Join Us? You’ll have the freedom to shape how technical learning happens at Meta For Data. You’ll be part of a team that values innovation, autonomy, and real impact. Flexible working options and a culture that supports growth - for our teams and our trainers.

Posted 1 day ago

Apply

15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Solution Architect (Network Traffic & Flow Data Systems) Location: Pune, India (with Travel to Onsite) Experience Required: 15+ years in solution architecture with at least 5 years in telecom data systems, network traffic monitoring, or real-time data streaming platforms. Overview: We are seeking a senior Solution Architect to lead the design, integration, and delivery of a large-scale network traffic and data flow system. This role is accountable for ensuring architectural integrity, zero-error tolerance, and robust fallback mechanisms across the entire solution lifecycle. The architect will oversee subscriber data capture, DPI, DR generation, Kafka integration, DWH ingestion, and secure API-based retrieval, ensuring compliance with security and regulatory requirements. Key Responsibilities: Own the end-to-end architecture spanning subscriber traffic capture, DPI, DR generation, Kafka streaming, and data lake ingestion. Design and document system architecture, data flow diagrams, and integration blueprints across DPI and traffic classification systems, nProbe, Kafka, Spark, and Cloudera CDP. Implement fallback and error-handling mechanisms to ensure zero data loss and high availability across all layers. Lead cross-functional collaboration with network engineers, Kafka developers, data platform teams, and security stakeholders. Ensure data governance, encryption, and compliance using tools like Apache Ranger, Atlas, SDX, and HashiCorp Vault. Oversee API design and exposure for customer access, including advanced search, session correlation, and audit logging. Drive SIT/UAT planning, performance benchmarking, and production rollout readiness. Provide technical leadership across multiple vendors and internal teams, ensuring alignment with business requirements and regulatory standards. Required Skills & Qualifications: Proven experience in telecom-grade architecture involving DPI, IPFIX/NetFlow, and subscriber metadata enrichment. Deep knowledge of Apache Kafka, Spark Structured Streaming, and Cloudera CDP (HDFS, Hive, Iceberg, Ranger). Experience integrating nProbe with Kafka and downstream analytics platforms. Strong understanding of QoE metrics, A/B party correlation, and application traffic classification. Expertise in RESTful API design, schema management (Avro/JSON), and secure data access protocols. Familiarity with network interfaces (Gn/Gi, Radius, DNS) and traffic filtering strategies. Experience implementing fallback mechanisms, error queues, and disaster recovery strategies. Excellent communication, documentation, and stakeholder management skills. Cloudera Certified Architect / Kafka Developer / AWS or GCP Solution Architect. Security certifications (e.g., CISSP, CISM) will be advantageous
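To make the Kafka-to-Spark leg of such a pipeline concrete, here is a hedged PySpark Structured Streaming sketch that reads flow records from a Kafka topic, parses them as JSON, and lands them as Parquet. The topic name, record schema, and paths are assumptions for illustration; the real schema would come from the Detail Record specification.

```python
# Requires the spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = SparkSession.builder.appName("flow_record_ingest").getOrCreate()

# Assumed flow-record fields; the real schema is defined by the DR specification.
flow_schema = StructType([
    StructField("src_ip", StringType()),
    StructField("dst_ip", StringType()),
    StructField("app_protocol", StringType()),
    StructField("bytes", LongType()),
    StructField("flow_start_ms", LongType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka-1:9092")   # assumed brokers
    .option("subscribe", "flow-records")                 # assumed topic
    .load()
)

# Kafka delivers bytes; cast to string and parse against the assumed schema.
flows = raw.select(F.from_json(F.col("value").cast("string"), flow_schema).alias("r")).select("r.*")

query = (
    flows.writeStream.format("parquet")
    .option("path", "/data/lake/flow_records")            # assumed landing path
    .option("checkpointLocation", "/data/checkpoints/flow_records")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```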

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Data Engineer We are looking for an experienced Data Engineer with strong expertise in Snowflake, dbt, Airflow, AWS, and modern data technologies like Python, Apache Spark, and NoSQL databases. The role focuses on designing, building, and optimizing data pipelines to support analytical and regulatory needs in the banking domain. Key Responsibilities Design and implement scalable and secure data pipelines using Airflow, dbt, Snowflake, and AWS services. Develop data transformation workflows and modular SQL logic using dbt for a centralized data warehouse in Snowflake. Build batch and near real-time data processing solutions using Apache Spark and Python. Work with structured and unstructured banking datasets stored across S3, NoSQL (e.g., MongoDB, DynamoDB), and relational databases. Ensure data quality, lineage, and observability through logging, testing, and monitoring tools. Support data needs for compliance, regulatory reporting, risk, fraud, and customer analytics. Ensure secure handling of sensitive data aligned with banking compliance standards (e.g., PII masking, role-based access). Collaborate closely with business users, data analysts, and data scientists to deliver production-grade datasets. Implement best practices for code versioning, CI/CD, and environment management Required Skills And Qualifications 5-8 years of experience in data engineering, preferably in banking, fintech, or regulated industries. Hands-on experience with: Snowflake (data modeling, performance tuning, security) dbt (modular SQL transformation, documentation, testing) Airflow (orchestration, DAGs) AWS (S3, Glue, Lambda, Redshift, IAM) Python (ETL scripting, data manipulation) Apache Spark (batch/stream processing using PySpark or Scala) NoSQL databases (e.g., DynamoDB, MongoDB, Cassandra) Strong SQL skills and experience in performance optimization and cost-efficient query design. Exposure to data governance, compliance, and security in the banking industry. Experience working with large-scale datasets and complex data transformations. Familiarity with version control (e.g., Git) and CI/CD pipelines. Preferred Qualifications Prior experience in banking/financial services Knowledge of Kafka or other streaming platforms. Exposure to data quality tools (e.g., Great Expectations, Soda). Certifications in Snowflake, AWS, or dbt. Strong communication skills and ability to work with cross-functional teams. About Convera Convera is the largest non-bank B2B cross-border payments company in the world. Formerly Western Union Business Solutions, we leverage decades of industry expertise and technology-led payment solutions to deliver smarter money movements to our customers – helping them capture more value with every transaction. Convera serves more than 30,000 customers ranging from small business owners to enterprise treasurers to educational institutions to financial institutions to law firms to NGOs. Our teams care deeply about the value we bring to our customers which makes Convera a rewarding place to work. This is an exciting time for our organization as we build our team with growth-minded, result-oriented people who are looking to move fast in an innovative environment. As a truly global company with employees in over 20 countries, we are passionate about diversity; we seek and celebrate people from different backgrounds, lifestyles, and unique points of view. We want to work with the best people and ensure we foster a culture of inclusion and belonging. 
We offer an abundance of competitive perks and benefits including: Competitive salary Opportunity to earn an annual bonus. Great career growth and development opportunities in a global organization A flexible approach to work There are plenty of amazing opportunities at Convera for talented, creative problem solvers who never settle for good enough and are looking to transform Business to Business payments. Apply now if you’re ready to unleash your potential.
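Returning to the pipeline stack this role describes (Airflow orchestrating dbt models on Snowflake), here is a small, hedged Airflow sketch of that pattern: a daily DAG that loads raw data and then runs and tests dbt models. The DAG id, commands, and project paths are illustrative assumptions, not this team's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_snowflake_dbt",        # assumed name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Placeholder ingestion step (e.g., loading files into Snowflake); shown as a stub command.
    load_raw = BashOperator(
        task_id="load_raw",
        bash_command="python /opt/pipelines/load_raw_to_snowflake.py",  # hypothetical script
    )

    # Run and then test dbt models; paths assume a checked-out dbt project on the worker.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
    )

    load_raw >> dbt_run >> dbt_test
```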

Posted 1 day ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

AI/ML Scientist – Global Data Analytics, Technology (Maersk) This position will be based in India – Bangalore A.P. Moller - Maersk A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers’ supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, and this means we have an elevated level of responsibility to continue to build an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners too. The Brief In this role as an AI/ML Scientist on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings. You should be able to design, develop, and implement machine learning models, conduct deep data analysis, and support decision-making with data-driven insights. Responsibilities include building and validating predictive models, supporting experiment design, and integrating advanced techniques like transformers, GANs, and reinforcement learning into scalable production systems. The role requires solving complex problems using NLP, deep learning, optimization, and computer vision. You should be comfortable working independently, writing reliable code with automated tests, and contributing to debugging and refinement. You’ll also document your methods and results clearly and collaborate with cross-functional teams to deliver high-impact AI/ML solutions that align with business objectives and user needs. What I'll be doing – your accountabilities? 
Design, develop, and implement machine learning models, conduct in-depth data analysis, and support decision-making with data-driven insights Develop predictive models and validate their effectiveness Support the design of experiments to validate and compare multiple machine learning approaches Research and implement cutting-edge techniques (e.g., transformers, GANs, reinforcement learning) and integrate models into production systems, ensuring scalability and reliability Apply creative problem-solving techniques to design innovative models, develop algorithms, or optimize workflows for data-driven tasks Independently apply data-driven solutions to ambiguous problems, leveraging tools like Natural Language Processing, deep learning frameworks, machine learning, optimization methods and computer vision frameworks Understand technical tools and frameworks used by the team, including programming languages, libraries, and platforms and actively support debugging or refining code in projects Write and integrate automated tests alongside your models or code to ensure reproducibility, scalability, and alignment with established quality standards Contribute to the design and documentation of AI/ML solutions, clearly detailing methodologies, assumptions, and findings for future reference and cross-team collaboration Collaborate across teams to develop and implement high-quality, scalable AI/ML solutions that align with business goals, address user needs, and improve performance Foundational Skills Mastered Data Analysis and Data Science concepts and can demonstrate this skill in complex scenarios AI & Machine Learning, Programming and Statistical Analysis Skills beyond the fundamentals and can demonstrate the skills in most situations without guidance. Specialized Skills To be able to understand beyond the fundamentals and can demonstrate in most situations without guidance: Data Validation and Testing Model Deployment Machine Learning Pipelines Deep Learning Natural Language Processing (NLP) Optimization & Scientific Computing Decision Modelling and Risk Analysis. To understand fundamentals and can demonstrate this skill in common scenarios with guidance: Technical Documentation. Qualifications & Requirements Bachelor’s degree in B.E./B.Tech, preferably in Computer Science, Data Science, Mathematics, Statistics, or related fields. 
Strong practical understanding of: Machine Learning algorithms (classification, regression, clustering, time-series) Statistical inference and probabilistic modeling Data wrangling, feature engineering, and preprocessing at scale Proficiency in collaborative development tools: IDEs (e.g., VS Code, Jupyter), Git/GitHub, CI/CD workflows, unit and integration testing Excellent coding and debugging skills in Python (preferred), with knowledge of SQL for large-scale data operations Experience working with: Versioned data pipelines, model reproducibility, and automated model testing Ability to work in agile product teams, handle ambiguity, and communicate effectively with both technical and business stakeholders Passion for continuous learning and applying AI/ML in impactful ways Preferred Experiences 5+ years of experience in AI/ML or Data Science roles, working on applied machine learning problems in production settings 5+ years of hands-on experience with: Apache Spark, distributed computing, and large-scale data processing Deep learning using TensorFlow or PyTorch Model serving via REST APIs, batch/streaming pipelines, or ML platforms Hands-on experience with: Cloud-native development (Azure preferred; AWS or GCP also acceptable) Databricks, Azure ML, or SageMaker platforms Experience with Docker, Kubernetes, and orchestration of ML systems in production Familiarity with A/B testing, causal inference, business impact modeling Exposure to visualization and monitoring tools: Power BI, Superset, Grafana Prior work in logistics, supply chain, operations research, or industrial AI use cases is a strong plus Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
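As a small illustration of the model-building-plus-automated-testing emphasis in this posting, here is a hedged scikit-learn sketch that trains a classifier, validates it on a holdout split, and wraps a basic quality check as a test. The dataset, feature names, and the 0.7 AUC threshold are assumptions for illustration, not Maersk specifics.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def train_and_score(df: pd.DataFrame, feature_cols: list[str], target_col: str) -> float:
    """Train a simple classifier and return holdout ROC AUC."""
    X_train, X_test, y_train, y_test = train_test_split(
        df[feature_cols], df[target_col], test_size=0.2, random_state=42, stratify=df[target_col]
    )
    model = GradientBoostingClassifier(random_state=42)
    model.fit(X_train, y_train)
    return roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

def test_model_beats_threshold():
    # Hypothetical prepared dataset; in practice this would come from the feature pipeline.
    df = pd.read_parquet("features/shipment_delays.parquet")
    auc = train_and_score(df, ["transit_days", "port_congestion_index"], "delayed")
    assert auc >= 0.7, f"Holdout AUC {auc:.3f} fell below the agreed quality bar"
```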

Posted 1 day ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Network Architect (Network Traffic Intelligence & Flow Data Systems) Location: Pune, India (with Travel to Onsite) Experience Required: 8+ years in network traffic monitoring and flow data systems, with 2+ years of hands-on experience in configuring and deploying nProbe Cento in high-throughput environments. Overview: We are seeking a specialist with deep expertise in network traffic probes, specifically nProbe Cento, to support the deployment, configuration, and integration of flow record generation systems. The consultant will work closely with Kafka developers, solution architects, and network teams to ensure accurate, high-performance flow data capture and export. This role is critical to ensure the scalability, observability, and compliance of the network traffic record infrastructure. Key Responsibilities: Design and document the end-to-end architecture for network traffic record systems, including flow ingestion, processing, storage, and retrieval. Deploy and configure nProbe Cento on telecom-grade network interfaces. Tune probe performance using PF_RING ZC drivers for high-speed traffic capture. Configure IPFIX/NetFlow export and integrate with Apache Kafka for real-time data streaming. Set up DPI rules to identify application-level traffic (e.g., popular messaging and social media applications). Align flow record schema with Detail Record specification. Lead the integration of nProbe Cento, Kafka, Apache Spark, and Cloudera CDP components into a unified data pipeline. Collaborate with Kafka and API teams to ensure compatibility of data formats and ingestion pipelines. Define interface specifications, deployment topologies, and data schemas for flow records and detail records. Monitor probe health, performance, and packet loss; implement logging and alerting mechanisms. Collaborate with security teams to implement data encryption, access control, and compliance with regulatory standards. Guide development and operations teams through SIT/UAT, performance tuning, and production rollout. Provide documentation, training, and handover materials for long-term operational support. Required Skills & Qualifications: Proven hands-on experience with nProbe Cento in production environments. Strong understanding of IPFIX, NetFlow, sFlow, and flow-based monitoring principles. Experience with Cloudera SDX, Ranger, Atlas, and KMS for data governance and security. Familiarity with HashiCorp Vault for secrets management. Strong understanding of network packet brokers (e.g., Gigamon, Ixia) and traffic aggregation strategies. Proven ability to design high-throughput, fault-tolerant, and cloud-native architectures. Experience with Kafka integration, including topic configuration and message formatting. Familiarity with DPI technologies and application traffic classification. Proficiency in Linux system administration, shell scripting, and network interface tuning. Knowledge of telecom network interfaces and traffic tapping strategies. Experience with PF_RING, ntopng, and related ntop tools (preferred). Ability to work independently and collaboratively with cross-functional technical teams. Excellent documentation and communication skills. Certifications in Cloudera, Kafka, or cloud platforms (e.g., AWS Architect, GCP Data Engineer) will be advantageous. A little about us: Innova Solutions is a diverse and award-winning global technology services partner. 
We provide our clients with strategic technology, talent, and business transformation solutions, enabling them to be leaders in their field. Founded in 1998, headquartered in Atlanta (Duluth), Georgia. Employs over 50,000 professionals worldwide, with annual revenue approaching $3.0B. Delivers strategic technology and business transformation solutions globally. Operates through global delivery centers across North America, Asia, and Europe. Provides services for data center migration and workload development for cloud service providers. Awardee of prestigious recognitions including: Women’s Choice Awards - Best Companies to Work for Women & Millennials, 2024 Forbes, America’s Best Temporary Staffing and Best Professional Recruiting Firms, 2023 American Best in Business, Globee Awards, Healthcare Vulnerability Technology Solutions, 2023 Global Health & Pharma, Best Full Service Workforce Lifecycle Management Enterprise, 2023 Received 3 SBU Leadership in Business Awards Stevie International Business Awards, Denials Remediation Healthcare Technology Solutions, 2023
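Complementing the probe-side work described in this posting, the sketch below shows a hedged consumer-side validation step: reading exported flow records from Kafka and flagging records that are missing mandatory fields before they reach the detail-record pipeline. Field names, brokers, and the topic are assumptions, since the actual schema is defined by the Detail Record specification.

```python
import json

from confluent_kafka import Consumer

REQUIRED_FIELDS = {"src_ip", "dst_ip", "app_protocol", "bytes", "flow_start_ms"}  # assumed DR fields

def validate(record: dict) -> list[str]:
    """Return the list of mandatory fields missing from a flow record."""
    return sorted(REQUIRED_FIELDS - record.keys())

consumer = Consumer({
    "bootstrap.servers": "kafka-1:9092",   # assumed brokers
    "group.id": "flow-record-validator",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["flow-records"])        # assumed topic

invalid_count = 0
try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        missing = validate(json.loads(msg.value()))
        if missing:
            invalid_count += 1
            # In a real deployment this would feed an error queue or alerting, not stdout.
            print(f"invalid flow record (total {invalid_count}), missing {missing}")
finally:
    consumer.close()
```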

Posted 1 day ago

Apply

1.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service work scope encompasses not just good integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtime in the underlying infrastructure (e.g., patching the Linux kernel to address a security vulnerability). Developing systems for monitoring, gathering telemetry on the service’s runtime characteristics, and acting on that telemetry data is also part of the charter. We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact. Minimum Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or related technical field. Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies. Must be a US passport holder; this is required by the position to access US Gov regions. Expertise in coding in Java and Python, with an emphasis on tuning/optimization. Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments. Experience with open-source software in the Big Data ecosystem. Experience at an organization with an operational/dev-ops culture. Solid understanding of networking, storage, and security components related to cloud infrastructure. Solid foundation in data structures, algorithms, and software design with strong analytical and debugging skills. Preferred Qualifications: Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink and other big data technologies. Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP. In-depth understanding of Java and JVM mechanics. Good problem-solving skills and the ability to work in a fast-paced, agile environment. Responsibilities Key Responsibilities: Participate in development and maintenance of a scalable and secure Hadoop-based data lake service. 
Code, integrate, and operationalize open and closed source data ecosystem components for Oracle cloud service offerings Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements. Becoming an active member of the Apache open source community when working on open source components Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud. Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
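As a complementary illustration for this posting's Hadoop data lake focus, here is a hedged PySpark sketch of the kind of large-scale batch rollup such a service would host, aggregating Parquet data on HDFS. Paths and column names are illustrative assumptions only.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_usage_rollup").getOrCreate()

# Assumed HDFS layout; a job like this would typically be submitted to YARN via spark-submit.
events = spark.read.parquet("hdfs:///data/lake/raw/usage_events/")

daily = (
    events.withColumn("event_date", F.to_date("event_ts"))    # assumed timestamp column
          .groupBy("event_date", "tenant_id")                 # assumed grouping keys
          .agg(
              F.count("*").alias("event_count"),
              F.sum("bytes_processed").alias("bytes_processed"),
          )
)

# Write the curated rollup back to the lake, partitioned by day for downstream queries.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "hdfs:///data/lake/curated/daily_usage/"
)
```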

Posted 1 day ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Responsibility Data Handling and Processing: •Proficient in SQL Server and query optimization. •Expertise in application data design and process management. •Extensive knowledge of data modelling. •Hands-on experience with Azure Data Factory, Azure Synapse Analytics, and Microsoft Fabric. •Experience working with Azure Databricks. •Expertise in data warehouse development, including experience with SSIS (SQL Server Integration Services) and SSAS (SQL Server Analysis Services). •Proficiency in ETL processes (data extraction, transformation, and loading), including data cleaning and normalization. •Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) for large-scale data processing. •Understanding of data governance, compliance, and security measures within Azure environments. Data Analysis and Visualization: •Experience in data analysis, statistical modelling, and machine learning techniques. •Proficiency in analytical tools like Python, R, and libraries such as Pandas, NumPy for data analysis and modelling. •Strong expertise in Power BI for data visualization, data modelling, and DAX queries, with knowledge of best practices. •Experience in implementing Row-Level Security in Power BI. •Ability to work with medium-complex data models and quickly understand application data design and processes. •Familiar with industry best practices for Power BI and experienced in performance optimization of existing implementations. •Understanding of machine learning algorithms, including supervised, unsupervised, and deep learning techniques. Non-Technical Skills: •Ability to lead a team of 4-5 developers and take ownership of deliverables. •Demonstrates a commitment to continuous learning, particularly with new technologies. •Strong communication skills in English, both written and verbal. •Able to effectively interact with customers during project implementation. •Capable of explaining complex technical concepts to non-technical stakeholders. Data Management: SQL, Azure Synapse Analytics, Azure Analysis Service and Data Marts, Microsoft Fabric ETL Tools: Azure Data Factory, Azure Data Bricks, Python, SSIS Data Visualization: Power BI, DAX
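For the data cleaning and normalization called out in this posting's ETL requirements, here is a small, hedged Pandas sketch of a typical pre-load step; the file, column names, and rules are assumptions for illustration.

```python
import pandas as pd

# Hypothetical extract produced upstream by ADF or Databricks.
df = pd.read_csv("extracts/customers.csv")

# Basic cleaning: trim text fields, normalise casing, and drop duplicate or keyless rows.
df["email"] = df["email"].str.strip().str.lower()
df["country"] = df["country"].str.strip().str.title()
df = df.drop_duplicates(subset=["customer_id"]).dropna(subset=["customer_id"])

# Simple min-max normalization of a numeric feature before it feeds a model or report.
span = df["annual_spend"].max() - df["annual_spend"].min()
df["annual_spend_norm"] = (df["annual_spend"] - df["annual_spend"].min()) / (span or 1)

df.to_parquet("curated/customers.parquet", index=False)
```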

Posted 1 day ago

Apply