5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About H.E. Services: At H.E. Services' vibrant tech center in Hyderabad, you will have the opportunity to contribute to technology innovation for Holman Automotive, a leading American fleet management and automotive services company. Our goal is to continue investing in people, processes, and facilities to ensure expansion in a way that allows us to support our customers and develop new tech solutions. Holman has come a long way during its first 100 years in business. The automotive markets Holman serves include fleet management and leasing; vehicle fabrication and upfitting; component manufacturing and productivity solutions; powertrain distribution and logistics services; commercial and personal insurance and risk management; and retail automotive sales as one of the largest privately owned dealership groups in the United States. Join us and be part of a team that's transforming the way Holman operates, creating a more efficient, data-driven, and customer-centric future.

Roles & Responsibilities:
- Design, develop, and maintain data pipelines using Databricks, Spark, and other Azure cloud technologies (a sketch follows this posting).
- Optimize data pipelines for performance, scalability, and reliability, ensuring high data warehouse speed and availability.
- Develop and maintain ETL processes using Databricks and Azure Data Factory for real-time or trigger-based data replication.
- Ensure data quality and integrity throughout the data lifecycle, implementing new data validation methods and analysis tools.
- Collaborate with data scientists, analysts, and stakeholders to understand and meet their data needs.
- Troubleshoot and resolve data-related issues, providing root cause analysis and recommendations.
- Manage a centralized data warehouse in Azure SQL to create a single source of truth for organizational data, ensuring compliance with data governance and security policies.
- Document data pipeline specifications, requirements, and enhancements, communicating effectively with the team and management.
- Leverage AI/ML capabilities to create innovative data science products.
- Champion and maintain testing suites, code reviews, and CI/CD processes.

Must Have:
- Strong knowledge of Databricks architecture and tools.
- Proficiency in SQL, Python, and PySpark for querying databases and data processing.
- Experience with Azure Data Lake Storage (ADLS), Blob Storage, and Azure SQL.
- Deep understanding of distributed computing and Spark for data processing.
- Experience with data integration and ETL tools, including Azure Data Factory.
- Advanced-level knowledge and practice of: data warehouse and data lake concepts and architectures; optimizing performance of databases and servers; managing infrastructure for storage and compute resources; writing unit tests and scripts; Git, GitHub, and CI/CD practices.

Good to Have:
- Experience with big data technologies such as Kafka, Hadoop, and Hive.
- Familiarity with Azure Databricks Medallion Architecture with DLT and Iceberg.
- Experience with semantic layers and reporting tools like Power BI.

Relevant Work Experience:
- 5+ years of experience as a Data Engineer, ETL Developer, or similar role, with a focus on Databricks and Spark.
- Experience working on internal, business-facing teams.
- Familiarity with agile development environments.

Education and Training:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
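As a hedged illustration of the pipeline work this posting describes, here is a minimal PySpark sketch of a Databricks-style ETL step. The storage path, table, and column names are invented for illustration, not taken from the posting.

```python
# Minimal PySpark sketch of a Databricks-style ETL step: read raw JSON from
# ADLS, validate and de-duplicate it, and write a curated Delta table.
# Paths, tables, and columns below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fleet_etl").getOrCreate()

raw = (spark.read.format("json")
       .load("abfss://raw@account.dfs.core.windows.net/telematics/"))

curated = (raw
           .filter(F.col("vehicle_id").isNotNull())     # basic validation
           .dropDuplicates(["vehicle_id", "event_ts"])  # idempotent re-runs
           .withColumn("event_date", F.to_date("event_ts")))

(curated.write.format("delta")
        .mode("overwrite")
        .partitionBy("event_date")
        .saveAsTable("curated.telematics_events"))
```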
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
A legacy of excellence, driving innovation and personalized service to create exceptional customer experiences.

About H.E. Services: At H.E. Services' vibrant tech center in Hyderabad, you'll have the opportunity to contribute to technology innovation for Holman Automotive, a leading American fleet management and automotive services company. Our goal is to continue investing in people, processes, and facilities to ensure expansion in a way that allows us to support our customers and develop new tech solutions. Holman has come a long way during its first 100 years in business. The automotive markets Holman serves include fleet management and leasing; vehicle fabrication and upfitting; component manufacturing and productivity solutions; powertrain distribution and logistics services; commercial and personal insurance and risk management; and retail automotive sales as one of the largest privately owned dealership groups in the United States. Join us and be part of a team that's transforming the way Holman operates, creating a more efficient, data-driven, and customer-centric future.

The Business Intelligence Developer II will be responsible for designing, developing, and maintaining advanced data solutions. This role involves creating pipelines in Databricks for Silver (curated) and Gold (aggregated, high-value) layers of data, developing insightful dashboards in Power BI, and applying Machine Learning (ML) and Artificial Intelligence (AI) techniques to solve complex business problems.

Roles & Responsibilities:
- Develop and maintain data pipelines in Databricks for Silver and Gold layers, ensuring data quality and reliability (see the sketch after this posting).
- Optimize data workflows to handle large volumes of structured and unstructured data efficiently.
- Design and optimize Power BI semantic models, including creating star schemas, managing table relationships, and defining DAX measures to support robust reporting solutions.
- Create, enhance, and maintain interactive dashboards and reports in Power BI to provide actionable insights to stakeholders.
- Collaborate with business units to gather requirements and ensure dashboards meet user needs.
- Use Databricks and other platforms to build and operationalize ML/AI models that enhance decision-making.
- Work closely with data engineers, analysts, and business stakeholders to deliver scalable and innovative data solutions.
- Participate in code reviews, ensure best practices, and contribute to a culture of continuous improvement.

Relevant Work Experience:
- 3-5 years of experience in business intelligence, data engineering, or a related role.
- Proficiency in Databricks (Spark, PySpark) for data processing and transformation.
- Strong expertise in Power BI for semantic model management, dashboarding, and visualization.
- Experience building and deploying ML/AI models in Databricks or similar platforms.

Must-Have Technical Skills:
- Proficiency in SQL and Python.
- Solid understanding of ETL/ELT pipelines and data warehousing concepts.
- Familiarity with cloud platforms (e.g., Azure, AWS) and tools like Delta Lake.
- Git, GitHub, and CI/CD practices.
- Excellent problem-solving and analytical skills.
- Strong communication skills, with the ability to translate complex technical concepts into business-friendly language.
- Proven ability to work both independently and collaboratively in a fast-paced environment.

Preferred Qualifications:
- Certifications in Power BI, Databricks, or cloud platforms.
- Experience with advanced analytics tools (e.g., TensorFlow, Scikit-learn, AutoML).
- Exposure to Agile methodologies and DevOps practices.
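The sketch below illustrates the Silver-to-Gold step in a medallion layout of the kind this posting describes: aggregating curated records into a high-value reporting table that a Power BI model could sit on. The schema and metric definitions are assumptions for illustration.

```python
# Hypothetical Silver -> Gold aggregation in a Databricks medallion layout.
# Table and column names are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

silver = spark.read.table("silver.service_orders")

gold = (silver
        .groupBy("region", F.date_trunc("month", "order_ts").alias("month"))
        .agg(F.countDistinct("order_id").alias("orders"),
             F.sum("order_value").alias("revenue")))

# Gold table becomes the source for the Power BI semantic model.
gold.write.format("delta").mode("overwrite").saveAsTable("gold.monthly_revenue")
```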
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Title: Data Engineer
Location: Hyderabad, India (Onsite)
Fulltime.

Job Description: We are seeking an experienced Data Engineer with 5-8 years of professional experience to design, build, and optimize robust and scalable data pipelines for our SmartFM platform. The ideal candidate will be instrumental in ingesting, transforming, and managing vast amounts of operational data from various building devices, ensuring high data quality and availability for analytics and AI/ML applications. This role is critical in enabling our platform to generate actionable insights, alerts, and recommendations for optimizing facility operations.

ROLES AND RESPONSIBILITIES
• Design, develop, and maintain scalable and efficient data ingestion pipelines from diverse sources (e.g., IoT devices, sensors, existing systems) using technologies like IBM StreamSets, Azure Data Factory, Apache Spark, Talend, Apache Flink, and Kafka (a sketch of one such ingestion pattern follows this posting).
• Implement robust data transformation and processing logic to clean, enrich, and structure raw data into formats suitable for analysis and machine learning models.
• Manage and optimize data storage solutions, primarily within MongoDB, ensuring efficient schema design, data indexing, and query performance for large datasets.
• Collaborate closely with Data Scientists to understand their data needs, provide high-quality, reliable datasets, and assist in deploying data-driven solutions.
• Ensure data quality, consistency, and integrity across all data pipelines and storage systems, implementing monitoring and alerting mechanisms for data anomalies.
• Work with cross-functional teams (Software Engineers, Data Scientists, Product Managers) to integrate data solutions with the React frontend and Node.js backend applications.
• Contribute to the continuous improvement of data architecture, tooling, and best practices, advocating for scalable and maintainable data solutions.
• Troubleshoot and resolve complex data-related issues, optimizing pipeline performance and ensuring data availability.
• Stay updated with emerging data engineering technologies and trends, evaluating and recommending new tools and approaches to enhance our data capabilities.

REQUIRED TECHNICAL SKILLS AND EXPERIENCE
• 5-8 years of professional experience in Data Engineering or a related field.
• Proven hands-on experience with data pipeline tools such as IBM StreamSets, Azure Data Factory, Apache Spark, Talend, Apache Flink, and Apache Kafka.
• Strong expertise in database management, particularly with MongoDB, including schema design, data ingestion pipelines, and data aggregation.
• Proficiency in at least one programming language commonly used in data engineering, such as Python or Java/Scala.
• Experience with big data technologies and distributed processing frameworks (e.g., Apache Spark, Hadoop) is highly desirable.
• Familiarity with cloud platforms (Azure, AWS, or GCP) and their data services.
• Solid understanding of data warehousing concepts, ETL/ELT processes, and data modeling.
• Experience with DevOps practices for data pipelines (CI/CD, monitoring, logging).
• Knowledge of Node.js and React environments to facilitate seamless integration with existing applications.

ADDITIONAL QUALIFICATIONS
• Demonstrated expertise in written and verbal communication, adept at simplifying complex technical concepts for both technical and non-technical audiences.
• Strong problem-solving and analytical skills with a meticulous approach to data quality.
• Experienced in collaborating and communicating seamlessly with diverse technology roles, including development, support, and product management.
• Highly motivated to acquire new skills, explore emerging technologies, and stay updated on the latest trends in data engineering and business needs.
• Experience in the facility management domain or IoT data is a plus.

EDUCATION REQUIREMENTS / EXPERIENCE
• Bachelor's (BE / BTech) / Master's degree (MS/MTech) in Computer Science, Information Systems, Mathematics, Statistics, or a related quantitative field.
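A minimal sketch of the Kafka-to-MongoDB ingestion pattern this posting centers on, using the kafka-python and pymongo libraries. The topic, connection strings, and field names are invented for illustration.

```python
# Sketch: consume device events from Kafka and land them in MongoDB,
# with a simple data-quality gate. All names below are hypothetical.
import json
from kafka import KafkaConsumer
from pymongo import MongoClient

consumer = KafkaConsumer(
    "building-device-events",                      # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
events = MongoClient("mongodb://localhost:27017")["smartfm"]["events"]

for msg in consumer:
    doc = msg.value
    if doc.get("device_id"):                       # basic quality gate
        events.insert_one(doc)
```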
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Customer is seeking skilled and motivated professionals to join our project team supporting Customer across multiple data and AI product domains. The candidates will be part of a dynamic, cloud-native data engineering and analytics environment, working alongside domain leads to support global initiatives.

Key Responsibilities:
- Work on the development and enhancement of data products using modern cloud-based technologies.
- Collaborate with Customer domain leads and stakeholders to translate requirements into scalable data solutions.
- Build and maintain data pipelines using AWS and Azure cloud platforms.
- Support integration with Snowflake-based data warehouses (see the loading sketch after this posting).
- Ensure solutions are well-documented, scalable, and aligned with enterprise architecture principles.
- Participate in discussions related to data architecture, best practices, and governance.

Technology Stack:
- Cloud Platforms: AWS (Primary), Azure (Secondary)
- Data Platforms: Snowflake
- Languages & Tools: Python, SQL, Spark (preferred), Terraform (optional)
- Other Tools: Git, CI/CD tools, JIRA, Confluence

Required Skills & Experience:
- 3–8 years of experience in data engineering, analytics engineering, or cloud-native solution delivery.
- Strong experience in building data pipelines in AWS and/or Azure environments.
- Hands-on experience with Snowflake is essential.
- Ability to work with distributed teams and collaborate directly with stakeholders.
- Strong problem-solving, communication, and documentation skills.
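A minimal sketch of loading staged data into Snowflake with the official Python connector, the kind of warehouse integration this role supports. Account, stage, and table names are placeholders.

```python
# Sketch: run a COPY INTO from an external stage into a Snowflake table.
# Credentials, warehouse, stage, and table names are all hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="...",
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)
with conn.cursor() as cur:
    cur.execute("""
        COPY INTO raw.events
        FROM @my_s3_stage/events/
        FILE_FORMAT = (TYPE = 'PARQUET')
    """)
conn.close()
```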
Posted 1 week ago
14.0 years
0 Lacs
India
Remote
Who We Are
At Twilio, we're shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work, and strong culture of connection and global inclusion, means that no matter your location, you're part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we're acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.

See yourself at Twilio
Join the team as Twilio's next Senior Engineering Manager on Twilio's Traffic Intelligence team.

About The Job
This position is needed to manage the team of machine learning engineers of the Growth & User Intelligence team and closely partner with Product & Engineering teams to execute the roadmap for Twilio's AI/ML products and services. You will understand customers' needs, build ML and Data Science products that work at a global scale, and own end-to-end execution of large-scale ML solutions. As a senior manager, you will closely partner with technology and product leaders in the organization to enable the engineers to turn ideas into reality.

Responsibilities
In this role, you'll:
- Build and maintain scalable machine learning solutions for the Traffic Intelligence vertical.
- Be a champion for your team, setting individuals up for success and putting others' growth first.
- Understand the architecture and processes required to build and operate always-available, complex, and scalable distributed systems in cloud environments.
- Advocate agile processes, continuous integration, and test automation.
- Be a strategic problem solver and thrive operating in broad scope, from conception through continuous operation of 24x7 services.
- Exhibit strong communication skills, in person or on paper: you can explain technical concepts to product managers, architects, other engineers, and support.

Qualifications
Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!

Required
- A minimum of 14 years of experience, including 5 years with a proven track record of leading and managing software teams.
- Experience managing multiple workstreams within the team.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Technical experience with: applied ML models, with proficiency in Python; modern data storage, messaging, and processing tools (Kafka, Apache Spark, Hadoop, Presto, DynamoDB, etc.); cloud technologies like AWS, GCP, etc.; ML frameworks like PyTorch, TensorFlow, or Keras; SaaS telemetry and observability tools such as Datadog, Grafana, etc.
- Excellent problem solving, critical thinking, and communication skills.
- Broad knowledge of development environments and tools used to implement and build code for deployment.
- Strong familiarity with agile processes, continuous integration, and a strong belief in automation over toil.
- As a pragmatist, you are able to distill complex and ambiguous situations into actionable plans for your team.
- Owned and operated services end-to-end, from requirements gathering and design, to debugging and testing, to release management and operational monitoring.

Desired
- Experience with Large Language Models.
- Experience designing and implementing highly scalable and performant ML models.

Location
This role will be remote and based in India (Karnataka, Tamil Nadu, Telangana, Maharashtra & New Delhi).

Travel
We prioritize connection and opportunities to build relationships with our customers and each other. For this role, you may be required to travel occasionally to participate in project or team in-person meetings.

What We Offer
Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location.

Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That's why we seek out colleagues who embody our values — something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn't what you're looking for, please consider other open positions.

Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services, and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description
As the senior data scientist, you will spearhead the development and execution of data-driven solutions for clients. Collaborating closely with clients, you will grasp their business needs and translate them into an AI/ML framework. Your expertise will be pivotal in designing models and selecting suitable techniques to address each client's specific challenges. Responsible for the entire data science project lifecycle, your duties extend from comprehensive data collection to meticulous model development, deployment, maintenance, and optimization. Your focus will particularly centre on crafting machine learning and deep learning models customized for retail and customer analytics, incorporating champion-challenger models to enhance performance (a toy champion/challenger comparison follows this posting). Effective communication with senior stakeholders is imperative in this role, and your proficiency in Python coding will be crucial for seamless end-to-end model development. As the lead data scientist, you will play a key role in driving innovative solutions that align with client objectives and industry best practices.

You should possess good communication and project management skills and be able to communicate effectively with a wide range of audiences, both technical and business. You will be responsible for creating presentations, reports, etc. to present analysis findings to end clients and stakeholders, and should be able to confidently socialize business recommendations and enable the customer organization to implement them.

You must be familiar with, and able to implement, a range of models including regression, classification, clustering, decision tree, random forest, support vector machine, naïve Bayes, GBM, XGBoost, multiple linear regression, logistic regression, and ARIMA/ARIMAX. You should be competent in Python (Pandas, NumPy, scikit-learn, etc.), possess strong analytical skills, and have experience in the creation and/or evaluation of predictive models.

Qualifications:
- Python for Data Science (mandatory), with good proficiency in end-to-end coding, including deployment experience.
- Experience processing large data.
- Min. 3 years' experience in the Retail domain.
- Preferred skills include proficiency in SQL, Spark, Excel, Azure, AWS, GCP, Power BI, and Flask.
- Preferred experience in areas such as time series analysis, market mix modelling, attribution modelling, churn modelling, market basket analysis, etc.
- Strong understanding of mathematics with logical thinking abilities.
- Excellent communication skills are a must.

Education: BTech/Masters in Statistics/Mathematics/Economics/Econometrics from Tier 1-2 institutions, or BE/B-Tech, MCA, or MBA.

Relevant Experience: 8+ years of hands-on experience in delivering Data Science/Analytics projects.
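A toy scikit-learn sketch of the champion-challenger evaluation the posting mentions: score an incumbent model against a challenger on held-out data before deciding which to promote. The dataset, models, and metric are stand-ins, not the client's actual setup.

```python
# Champion/challenger comparison on synthetic data (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

champion = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)     # incumbent
challenger = GradientBoostingClassifier().fit(X_tr, y_tr)         # candidate

for name, model in [("champion", champion), ("challenger", challenger)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")   # promote whichever wins on held-out data
```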
Posted 1 week ago
1.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description

About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope encompasses not just tight integration with OCI's native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching and upgrades, and maintaining high availability of the service in the face of random failures and planned downtimes in the underlying infrastructure (e.g., patching the Linux kernels to address a security vulnerability). Developing systems for monitoring, gathering telemetry on the service's runtime characteristics, and acting on that telemetry data is also part of the charter.

We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics (an illustrative job of this kind follows this posting). At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space: we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.

Minimum Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
- US passport holders (required by the position to access US Gov regions).
- Expertise in coding in Java and Python, with emphasis on tuning/optimization.
- Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
- Experience with open-source software in the Big Data ecosystem.
- Experience at an organization with an operational/dev-ops culture.
- Solid understanding of networking, storage, and security components related to cloud infrastructure.
- Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.

Preferred Qualifications:
- Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies.
- Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
- In-depth understanding of Java and JVM mechanics.
- Good problem-solving skills and the ability to work in a fast-paced, agile environment.

Responsibilities

Key Responsibilities:
- Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service.
- Code, integrate, and operationalize open and closed source data ecosystem components for Oracle cloud service offerings.
- Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
- Become an active member of the Apache open source community when working on open source components.
- Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.

Qualifications
Career Level - IC2

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
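For illustration, here is a minimal PySpark job of the sort that runs on a Hadoop-based data lake service like the one described. The HDFS paths and schema are placeholders, not Oracle Big Data Service specifics.

```python
# Sketch: read Parquet from an HDFS-backed lake, roll up daily counts,
# write the result back. All paths below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lake_rollup").getOrCreate()

logs = spark.read.parquet("hdfs:///data/lake/access_logs/")
daily = (logs.groupBy(F.to_date("ts").alias("day"))
             .agg(F.count("*").alias("requests")))
daily.write.mode("overwrite").parquet("hdfs:///data/lake/rollups/daily/")
```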
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
- Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements.
- Identify and analyze issues, make recommendations, and implement solutions.
- Utilize knowledge of business processes, system processes, and industry standards to solve complex issues.
- Analyze information and make evaluative judgements to recommend solutions and improvements.
- Conduct testing and debugging, utilize script tools, and write basic code for design specifications.
- Assess applicability of similar experiences and evaluate options under circumstances not covered by procedures.
- Develop working knowledge of Citi's information systems, procedures, standards, client server application development, network operations, database administration, systems administration, data center operations, and PC-based applications.
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
- 4+ years of relevant experience
- Experience in programming/debugging used in business applications
- Working knowledge of industry practice and standards
- Comprehensive knowledge of specific business area for application development
- Working knowledge of program languages
- Consistently demonstrates clear and concise written and verbal communication

Education:
- Bachelor's degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services, and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description
- 3 to 5 years of experience in data engineering with Azure cloud services.
- Strong expertise in Azure Data Factory (ADF) for pipeline orchestration.
- Hands-on experience with Azure Event Hub for real-time data streaming (a consumer sketch follows this posting).
- Proficiency in Python (PySpark, Pandas, scripting) and SQL for data processing.
- Extensive experience with Azure Databricks (Spark, Delta Lake) and dbt.
- Experience with CI/CD, Git, and Infrastructure as Code (IaC).
- Familiarity with Snowflake.
- Exposure to Power BI.
- Knowledge of SQL, NoSQL, and data warehousing concepts.
- Strong problem-solving and debugging skills.

Qualifications
Graduate/Post Graduate
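A hedged sketch of real-time consumption from Azure Event Hub, as named in the posting, using the azure-eventhub SDK. The connection string, hub name, and handler are dummies.

```python
# Sketch: receive events from an Event Hub and checkpoint progress.
# Connection details below are placeholders.
from azure.eventhub import EventHubConsumerClient

def on_event(partition_context, event):
    print(partition_context.partition_id, event.body_as_str())
    partition_context.update_checkpoint(event)   # record progress

client = EventHubConsumerClient.from_connection_string(
    conn_str="Endpoint=sb://...",                # placeholder connection string
    consumer_group="$Default",
    eventhub_name="telemetry",                   # hypothetical hub name
)
with client:
    client.receive(on_event=on_event, starting_position="-1")  # from start
```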
Posted 1 week ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Data Engineer
📍 Location: Gurugram, India
🕒 Experience: 6–8 years
🧑‍💻 Employment Type: Full-time

Key Responsibilities
- Design, build, and optimize scalable data pipelines to support advanced Media Mix Modeling (MMM) and Multi-Touch Attribution (MTA) models (an orchestration sketch follows this posting).
- Collaborate with Data Scientists to prepare data for training, validation, and deployment of machine learning models and statistical algorithms.
- Ingest and transform large volumes of structured and unstructured data from multiple sources, ensuring data quality and integrity.
- Partner with cross-functional teams (AdSales, Analytics, and Product) to deliver reliable data solutions that drive marketing effectiveness and campaign performance.
- Automate data workflows and build reusable components for model deployment, data validation, and reporting.
- Support data scientists with efficient access to cleaned and transformed data, optimizing for both performance and usability.
- Contribute to the design of a unified data architecture supporting AdTech, OTT, and digital media ecosystems.
- Stay updated with the latest trends in data engineering, AI-driven analytics, and cloud-native tools to improve data delivery and model deployment processes.

Required Skills & Experience
- 6+ years of hands-on experience in Data Engineering, data analytics, or related roles.
- At least 3 years working in AdTech, AdSales, or digital media analytics environments.
- Experience supporting MMM and MTA modeling efforts with high-quality, production-ready data pipelines.
- Proficiency in Python, SQL, and data transformation tools; experience with R is a plus.
- Strong knowledge of data modeling, ETL pipelines, and handling large-scale datasets using distributed systems (e.g., Spark, AWS, or GCP).
- Familiarity with cloud platforms (AWS, Azure, or GCP) and data services (S3, Redshift, BigQuery, Snowflake, etc.).
- Experience with BI tools such as Tableau, Power BI, or Looker for report automation and insight generation.
- Solid understanding of statistical techniques, A/B testing, and model evaluation metrics.
- Excellent communication and collaboration skills to work with both technical and non-technical stakeholders.

Preferred Qualifications
- Experience in media or OTT data environments.
- Exposure to machine learning model deployment, model monitoring, and MLOps practices.
- Knowledge of Kafka, Airflow, or dbt for orchestration and transformation.
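An assumed-shape Airflow DAG (Airflow 2.x) for a daily MMM data-prep pipeline like the one this role supports. Task logic is stubbed and the DAG/task names are hypothetical.

```python
# Sketch: two-step daily pipeline feeding an MMM model. Names are invented.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_spend():
    print("pull channel spend from source systems")   # stub

def build_features():
    print("aggregate spend and outcomes into model-ready features")  # stub

with DAG(
    dag_id="mmm_daily_prep",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # Airflow 2.4+ keyword; earlier: schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_spend", python_callable=extract_spend)
    features = PythonOperator(task_id="build_features", python_callable=build_features)
    extract >> features         # run feature build after extraction
```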
Posted 1 week ago
1.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description

About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope encompasses not just tight integration with OCI's native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching and upgrades, and maintaining high availability of the service in the face of random failures and planned downtimes in the underlying infrastructure (e.g., patching the Linux kernels to address a security vulnerability). Developing systems for monitoring, gathering telemetry on the service's runtime characteristics, and acting on that telemetry data is also part of the charter.

We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space: we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.

Minimum Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
- US passport holders (required by the position to access US Gov regions).
- Expertise in coding in Java and Python, with emphasis on tuning/optimization.
- Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
- Experience with open-source software in the Big Data ecosystem.
- Experience at an organization with an operational/dev-ops culture.
- Solid understanding of networking, storage, and security components related to cloud infrastructure.
- Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.

Preferred Qualifications:
- Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies.
- Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
- In-depth understanding of Java and JVM mechanics.
- Good problem-solving skills and the ability to work in a fast-paced, agile environment.

Responsibilities

Key Responsibilities:
- Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service.
- Code, integrate, and operationalize open and closed source data ecosystem components for Oracle cloud service offerings.
- Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
- Become an active member of the Apache open source community when working on open source components.
- Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.

Qualifications
Career Level - IC2

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
12.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position: Senior Technical Leader - Backend
Location: Mumbai, India [Thane]
Team: Engineering
Experience: 12+ Years

🚀 Are you a seasoned technical leader looking to drive engineering excellence at scale? At Netcore Cloud, we're seeking a Senior Technical Leader who brings deep technical expertise, a track record of designing scalable systems, and a passion for innovation. This is a high-impact role where you will lead the architecture and design of mission-critical systems that power user engagement for thousands of global brands.

🛠️ What You'll Do
- Architect highly available, scalable, and fault-tolerant backend systems handling billions of events and terabytes of data.
- Design real-time campaign processing engines capable of delivering 10 million+ messages per minute (a simplified publishing sketch follows this posting).
- Lead development of complex analytics frameworks including cohort analysis, funnel tracking, and user behavior modeling.
- Drive architecture decisions on distributed systems, microservices, and cloud-native platforms.
- Define technical roadmaps and work closely with engineering teams to ensure alignment and execution.
- Collaborate across product, engineering, DevOps, and data teams to deliver business-critical functionality.
- Mentor engineers and contribute to engineering excellence through code and design reviews, best practice evangelism, and training.
- Evaluate and implement tools and frameworks for continuous improvement in scalability, performance, and observability.

🧠 What You Bring
- 12+ years of hands-on experience in software engineering with a strong foundation in Java or Golang and related backend technologies.
- Proven experience designing distributed systems, microservices, and event-driven architectures.
- Deep knowledge of cloud platforms (AWS/GCP), CI/CD, containerization (Docker, Kubernetes), and infrastructure as code.
- Strong understanding of data processing at scale using Kafka, NoSQL DBs (MongoDB/Cassandra), Redis, and RDBMS (MySQL/PostgreSQL).
- Exposure to stream processing engines (e.g., Apache Storm/Flink/Spark) is a plus.
- Familiarity with AI tools and their integration into scalable systems is a plus.
- Experience with application security, fault tolerance, caching, multithreading, and performance tuning.
- A mindset of quality, ownership, and delivering business value.

💡 Why Netcore?
Being first is in our nature. Netcore Cloud is the first and leading AI/ML-powered customer engagement and experience platform (CEE) that helps B2C brands increase engagement, conversions, revenue, and retention. Our cutting-edge SaaS products enable personalized engagement across the entire customer journey and build amazing digital experiences for businesses of all sizes.

Netcore's Engineering team focuses on adoption, scalability, complex challenges, and fastest processing. We use versatile tech stacks like streaming technologies and queue management systems such as Kafka, Storm, RabbitMQ, Celery, and RedisQ.

Netcore strikes a perfect balance between experience and agility. We currently work with 5000+ enterprise brands across 18 countries, serving over 70% of India's Unicorns, positioning us among the top-rated customer engagement & experience platforms. Headquartered in Mumbai, we have a global footprint across 10 countries, including the United States and Germany. Being certified as a Great Place to Work for three consecutive years reinforces Netcore's principle of being a people-centric company — where you're not just an employee but part of a family.

🌟 What's in It for You?
- Immense growth and continuous learning.
- Solve complex engineering problems at scale.
- Work with top industry talent and global brands.
- An open, entrepreneurial culture that values innovation.

📩 Ready to shape the future of digital customer engagement? Apply now — your next big opportunity starts here. A career at Netcore is more than just a job; it's an opportunity to shape the future. Learn more at netcorecloud.com.
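The posting's stack is Java/Golang; purely to illustrate the high-throughput campaign-event publishing pattern described above, here is a simplified Python sketch with kafka-python. The topic and payload are invented.

```python
# Sketch: batch-oriented producer settings for throughput when publishing
# campaign events to Kafka. Broker, topic, and payload are hypothetical.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    linger_ms=5,                 # allow small batching delay
    batch_size=64 * 1024,        # larger batches for throughput
    acks=1,
)
for user_id in range(1000):
    producer.send("campaign-events", {"user_id": user_id, "campaign": "c42"})
producer.flush()                 # ensure all buffered messages are sent
```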
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as a Principal Engineer
- This is a challenging role that will see you design and engineer software with the customer or user experience as the primary objective
- You'll actively contribute to our architecture, design and engineering centre of excellence, collaborating to improve the bank's overall software engineering capability
- You'll gain valuable stakeholder exposure as you build and leverage relationships, as well as the opportunity to hone your technical talents
- We're offering this role at vice president level

What you'll do
As a Principal Engineer, you'll be creating great customer outcomes via engineering and innovative solutions to existing and new challenges, and technology designs which are innovative, customer centric, high performance, secure and robust. You'll be working with software engineers in the production and prototyping of innovative ideas, engaging with domain and enterprise architects to validate and leverage these in wider contexts, by incorporating the relevant architectures. You'll be leading functional engineering teams, managing end-to-end product implementations, and driving demos and stakeholder engagement across platforms. We'll also look to you to design and develop software with a focus on the automation of build, test and deployment activities, while developing the discipline of software engineering across the business.

You'll Also Be
- Defining, creating and providing oversight and governance of engineering and design solutions with a focus on end-to-end automation, simplification, resilience, security, performance, scalability and reusability
- Working within a platform or feature team along with software engineers to design and engineer complex software, scripts and tools to enable the delivery of bank platforms, applications and services, acting as a point of contact for solution design considerations
- Defining and developing architecture models and roadmaps of application and software components to meet business and technical requirements, driving common usability across products and domains
- Designing, producing, testing and implementing the working code, along with applying Agile methods to the development of software with the use of DevOps techniques

The skills you'll need
You'll come with significant experience in software engineering, software or database design and architecture, as well as experience of developing software within a DevOps and Agile framework. Along with an expert understanding of the latest market trends, technologies and tools, you'll bring significant and demonstrable experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance.

You'll Also Need
- Strong experience in gathering business requirements, translating them into technical user stories, and leading functional solution design, especially within the banking domain and CRM (MS Dynamics)
- Hands-on experience with PowerApps, D365 (including Custom Pages), and frontend configuration; proficiency in Power BI (SQL, DAX, Power Query, Data Modelling, RLS, Azure, Lakehouse, Python, Spark SQL)
- A background in designing or implementing APIs
- The ability to rapidly and effectively understand and translate product and business requirements into technical solutions
Posted 1 week ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Senior Technical Trainer – Cloud, Data & AI/ML
Location: Pune
Experience Required: 10+ Years

About the Role: We're looking for an experienced and passionate technical trainer who can help elevate our teams' capabilities in cloud technologies, data engineering, and AI/ML. This role is ideal for someone who enjoys blending hands-on tech skills with a strong ability to simplify, teach, and mentor. As we grow and scale at Meta For Data, building internal expertise is a key part of our strategy, and you'll be central to that effort.

What You'll Be Doing:
- Lead and deliver in-depth training sessions (both live and virtual) across areas like cloud architecture, data engineering, and machine learning.
- Build structured training content including presentations, labs, exercises, and assessments.
- Develop learning journeys tailored to different experience levels and roles, ranging from new hires to experienced engineers.
- Continuously update training content to reflect changes in tools, platforms, and best practices.
- Collaborate with engineering, HR, and L&D teams to roll out training schedules, track attendance, and gather feedback.
- Support ongoing learning post-training through mentoring, labs, and knowledge checks.

What We're Looking For:
- Around 10 years of experience in a mix of software development, cloud/data/ML engineering, and technical training.
- Deep familiarity with at least one cloud platform (AWS, Azure, or GCP); AWS or Azure is preferred.
- Strong grip on data platforms, ETL pipelines, Big Data tools (like Spark or Hadoop), and warehouse systems.
- Solid understanding of the AI/ML lifecycle (model building, tuning, deployment), with hands-on experience in Python-based libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
- Confident communicator who's comfortable speaking to groups and explaining complex concepts simply.
- Bonus if you hold any relevant certifications like AWS Solutions Architect, Google Data Engineer, or Microsoft AI Engineer.

Nice to Have:
- Experience creating online training modules or managing LMS platforms.
- Prior experience training diverse audiences: tech teams, analysts, product managers, etc.
- Familiarity with MLOps and modern deployment practices for AI models.

Why Join Us?
- You'll have the freedom to shape how technical learning happens at Meta For Data.
- You'll be part of a team that values innovation, autonomy, and real impact.
- Flexible working options and a culture that supports growth, for our teams and our trainers.
Posted 1 week ago
15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Solution Architect (Network Traffic & Flow Data Systems)
Location: Pune, India (with travel to onsite)
Experience Required: 15+ years in solution architecture, with at least 5 years in telecom data systems, network traffic monitoring, or real-time data streaming platforms.

Overview: We are seeking a senior Solution Architect to lead the design, integration, and delivery of a large-scale network traffic and data flow system. This role is accountable for ensuring architectural integrity, zero-error tolerance, and robust fallback mechanisms across the entire solution lifecycle. The architect will oversee subscriber data capture, DPI, DR generation, Kafka integration, DWH ingestion, and secure API-based retrieval, ensuring compliance with security regulations.

Key Responsibilities:
- Own the end-to-end architecture spanning subscriber traffic capture, DPI, DR generation, Kafka streaming, and data lake ingestion (an illustrative streaming-ingestion sketch follows this posting).
- Design and document system architecture, data flow diagrams, and integration blueprints across DPI and traffic classification systems, nProbe, Kafka, Spark, and Cloudera CDP.
- Implement fallback and error-handling mechanisms to ensure zero data loss and high availability across all layers.
- Lead cross-functional collaboration with network engineers, Kafka developers, data platform teams, and security stakeholders.
- Ensure data governance, encryption, and compliance using tools like Apache Ranger, Atlas, SDX, and HashiCorp Vault.
- Oversee API design and exposure for customer access, including advanced search, session correlation, and audit logging.
- Drive SIT/UAT planning, performance benchmarking, and production rollout readiness.
- Provide technical leadership across multiple vendors and internal teams, ensuring alignment with business requirements and regulatory standards.

Required Skills & Qualifications:
- Proven experience in telecom-grade architecture involving DPI, IPFIX/NetFlow, and subscriber metadata enrichment.
- Deep knowledge of Apache Kafka, Spark Structured Streaming, and Cloudera CDP (HDFS, Hive, Iceberg, Ranger).
- Experience integrating nProbe with Kafka and downstream analytics platforms.
- Strong understanding of QoE metrics, A-/B-party correlation, and application traffic classification.
- Expertise in RESTful API design, schema management (Avro/JSON), and secure data access protocols.
- Familiarity with network interfaces (Gn/Gi, RADIUS, DNS) and traffic filtering strategies.
- Experience implementing fallback mechanisms, error queues, and disaster recovery strategies.
- Excellent communication, documentation, and stakeholder management skills.
- Cloudera Certified Architect / Kafka Developer / AWS or GCP Solution Architect certification.
- Security certifications (e.g., CISSP, CISM) will be advantageous.
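An illustrative Spark Structured Streaming job consuming enriched data records (DRs) from Kafka and landing them in the lake, per the architecture described. The topic, schema, and paths are assumptions; the job also needs the spark-sql-kafka connector on the classpath.

```python
# Sketch: Kafka -> Spark Structured Streaming -> Parquet landing zone,
# with a checkpoint for recovery/fallback. Names below are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, LongType

spark = SparkSession.builder.appName("dr_ingest").getOrCreate()

schema = (StructType()
          .add("subscriber_id", StringType())
          .add("app_class", StringType())
          .add("bytes", LongType()))

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "dr-records")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("dr"))
          .select("dr.*"))

(stream.writeStream.format("parquet")
       .option("path", "hdfs:///lake/dr/")
       .option("checkpointLocation", "hdfs:///chk/dr/")   # recovery point
       .start()
       .awaitTermination())
```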
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Data Engineer We are looking for an experienced Data Engineer with strong expertise in Snowflake, dbt, Airflow, AWS, and modern data technologies like Python, Apache Spark, and NoSQL databases. The role focuses on designing, building, and optimizing data pipelines to support analytical and regulatory needs in the banking domain. Key Responsibilities Design and implement scalable and secure data pipelines using Airflow, dbt, Snowflake, and AWS services. Develop data transformation workflows and modular SQL logic using dbt for a centralized data warehouse in Snowflake. Build batch and near real-time data processing solutions using Apache Spark and Python. Work with structured and unstructured banking datasets stored across S3, NoSQL (e.g., MongoDB, DynamoDB), and relational databases. Ensure data quality, lineage, and observability through logging, testing, and monitoring tools. Support data needs for compliance, regulatory reporting, risk, fraud, and customer analytics. Ensure secure handling of sensitive data aligned with banking compliance standards (e.g., PII masking, role-based access). Collaborate closely with business users, data analysts, and data scientists to deliver production-grade datasets. Implement best practices for code versioning, CI/CD, and environment management Required Skills And Qualifications 5-8 years of experience in data engineering, preferably in banking, fintech, or regulated industries. Hands-on experience with: Snowflake (data modeling, performance tuning, security) dbt (modular SQL transformation, documentation, testing) Airflow (orchestration, DAGs) AWS (S3, Glue, Lambda, Redshift, IAM) Python (ETL scripting, data manipulation) Apache Spark (batch/stream processing using PySpark or Scala) NoSQL databases (e.g., DynamoDB, MongoDB, Cassandra) Strong SQL skills and experience in performance optimization and cost-efficient query design. Exposure to data governance, compliance, and security in the banking industry. Experience working with large-scale datasets and complex data transformations. Familiarity with version control (e.g., Git) and CI/CD pipelines. Preferred Qualifications Prior experience in banking/financial services Knowledge of Kafka or other streaming platforms. Exposure to data quality tools (e.g., Great Expectations, Soda). Certifications in Snowflake, AWS, or dbt. Strong communication skills and ability to work with cross-functional teams. About Convera Convera is the largest non-bank B2B cross-border payments company in the world. Formerly Western Union Business Solutions, we leverage decades of industry expertise and technology-led payment solutions to deliver smarter money movements to our customers – helping them capture more value with every transaction. Convera serves more than 30,000 customers ranging from small business owners to enterprise treasurers to educational institutions to financial institutions to law firms to NGOs. Our teams care deeply about the value we bring to our customers which makes Convera a rewarding place to work. This is an exciting time for our organization as we build our team with growth-minded, result-oriented people who are looking to move fast in an innovative environment. As a truly global company with employees in over 20 countries, we are passionate about diversity; we seek and celebrate people from different backgrounds, lifestyles, and unique points of view. We want to work with the best people and ensure we foster a culture of inclusion and belonging. 
We offer an abundance of competitive perks and benefits, including:
Competitive salary
Opportunity to earn an annual bonus
Great career growth and development opportunities in a global organization
A flexible approach to work

There are plenty of amazing opportunities at Convera for talented, creative problem solvers who never settle for good enough and are looking to transform Business to Business payments. Apply now if you're ready to unleash your potential.
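To make the orchestration stack in this posting concrete, here is a minimal sketch of an Airflow DAG that loads raw data into Snowflake and then runs and tests a dbt project. All names (DAG id, script paths, dbt project directory, target) are hypothetical placeholders, not Convera's actual pipeline.

```python
# Minimal sketch of an Airflow-orchestrated dbt workflow (Airflow 2.x).
# Every path and identifier below is a hypothetical placeholder.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_banking_elt",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Stage raw S3 files into Snowflake via a SnowSQL script (illustrative only)
    load_raw = BashOperator(
        task_id="load_raw_to_snowflake",
        bash_command="snowsql -f /opt/elt/load_raw.sql",  # hypothetical script
    )

    # Build the modular dbt models in Snowflake
    run_dbt = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --project-dir /opt/elt/dbt --target prod",
    )

    # Run dbt tests so data-quality checks gate downstream consumers
    test_dbt = BashOperator(
        task_id="test_dbt_models",
        bash_command="dbt test --project-dir /opt/elt/dbt --target prod",
    )

    load_raw >> run_dbt >> test_dbt
```

Chaining `dbt test` after `dbt run` is one common way to satisfy the "data quality, lineage, and observability" responsibility: a failed test fails the DAG run rather than silently publishing bad data.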
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
AI/ML Scientist – Global Data Analytics, Technology (Maersk)

This position will be based in India – Bangalore.

A.P. Moller – Maersk
A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers' supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, which means we have an elevated level of responsibility to continue building an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners.

The Brief
In this role as an AI/ML Scientist on the Global Data and Analytics (GDA) team, you will support the development of strategic, visibility-driven recommendation systems that serve both internal stakeholders and external customers. This initiative aims to deliver actionable insights that enhance supply chain execution, support strategic decision-making, and enable innovative service offerings. You should be able to design, develop, and implement machine learning models, conduct deep data analysis, and support decision-making with data-driven insights. Responsibilities include building and validating predictive models, supporting experiment design, and integrating advanced techniques like transformers, GANs, and reinforcement learning into scalable production systems. The role requires solving complex problems using NLP, deep learning, optimization, and computer vision. You should be comfortable working independently, writing reliable code with automated tests, and contributing to debugging and refinement. You'll also document your methods and results clearly and collaborate with cross-functional teams to deliver high-impact AI/ML solutions that align with business objectives and user needs.

What I'll be doing – your accountabilities:
Design, develop, and implement machine learning models, conduct in-depth data analysis, and support decision-making with data-driven insights.
Develop predictive models and validate their effectiveness.
Support the design of experiments to validate and compare multiple machine learning approaches.
Research and implement cutting-edge techniques (e.g., transformers, GANs, reinforcement learning) and integrate models into production systems, ensuring scalability and reliability.
Apply creative problem-solving techniques to design innovative models, develop algorithms, or optimize workflows for data-driven tasks.
Independently apply data-driven solutions to ambiguous problems, leveraging tools such as natural language processing, deep learning frameworks, machine learning, optimization methods, and computer vision frameworks.
Understand the technical tools and frameworks used by the team, including programming languages, libraries, and platforms, and actively support debugging or refining code in projects.
Write and integrate automated tests alongside models or code to ensure reproducibility, scalability, and alignment with established quality standards.
Contribute to the design and documentation of AI/ML solutions, clearly detailing methodologies, assumptions, and findings for future reference and cross-team collaboration.
Collaborate across teams to develop and implement high-quality, scalable AI/ML solutions that align with business goals, address user needs, and improve performance.

Foundational Skills
Data Analysis and Data Science concepts: mastered, demonstrable in complex scenarios.
AI & Machine Learning, Programming, and Statistical Analysis: beyond the fundamentals, demonstrable in most situations without guidance.

Specialized Skills
Beyond the fundamentals, demonstrable in most situations without guidance: Data Validation and Testing; Model Deployment; Machine Learning Pipelines; Deep Learning; Natural Language Processing (NLP); Optimization & Scientific Computing; Decision Modelling and Risk Analysis.
Fundamentals, demonstrable in common scenarios with guidance: Technical Documentation.

Qualifications & Requirements
Bachelor's degree in B.E./B.Tech, preferably in Computer Science, Data Science, Mathematics, Statistics, or related fields.
Strong practical understanding of: machine learning algorithms (classification, regression, clustering, time-series); statistical inference and probabilistic modeling; data wrangling, feature engineering, and preprocessing at scale.
Proficiency in collaborative development tools: IDEs (e.g., VS Code, Jupyter), Git/GitHub, CI/CD workflows, unit and integration testing.
Excellent coding and debugging skills in Python (preferred), with knowledge of SQL for large-scale data operations.
Experience working with versioned data pipelines, model reproducibility, and automated model testing.
Ability to work in agile product teams, handle ambiguity, and communicate effectively with both technical and business stakeholders.
Passion for continuous learning and applying AI/ML in impactful ways.

Preferred Experiences
5+ years of experience in AI/ML or Data Science roles, working on applied machine learning problems in production settings.
5+ years of hands-on experience with: Apache Spark, distributed computing, and large-scale data processing; deep learning using TensorFlow or PyTorch; model serving via REST APIs, batch/streaming pipelines, or ML platforms.
Hands-on experience with cloud-native development (Azure preferred; AWS or GCP also acceptable) and with Databricks, Azure ML, or SageMaker platforms.
Experience with Docker, Kubernetes, and orchestration of ML systems in production.
Familiarity with A/B testing, causal inference, and business impact modeling.
Exposure to visualization and monitoring tools: Power BI, Superset, Grafana.
Prior work in logistics, supply chain, operations research, or industrial AI use cases is a strong plus.

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements.

We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
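The posting repeatedly stresses validating and comparing multiple machine learning approaches with reproducible, automated testing. A minimal sketch of that pattern, assuming scikit-learn and a synthetic dataset (none of this is Maersk's actual data or model stack):

```python
# Cross-validated comparison of two candidate models on the same fixed splits,
# so the benchmark is reproducible. Data is synthetic, purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)  # fixed splits

for name, model in [
    ("logistic_regression", LogisticRegression(max_iter=1000)),
    ("gradient_boosting", GradientBoostingClassifier(random_state=42)),
]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    # Reporting mean +/- std keeps the model comparison honest
    print(f"{name}: AUC {scores.mean():.3f} +/- {scores.std():.3f}")
```

Pinning the fold generator's random seed is what makes the comparison repeatable in CI: both models see exactly the same train/validation partitions on every run.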
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Network Architect (Network Traffic Intelligence & Flow Data Systems)
Location: Pune, India (with travel to onsite)
Experience Required: 8+ years in network traffic monitoring and flow data systems, with 2+ years of hands-on experience in configuring and deploying nProbe Cento in high-throughput environments.

Overview:
We are seeking a specialist with deep expertise in network traffic probes, specifically nProbe Cento, to support the deployment, configuration, and integration of flow record generation systems. The consultant will work closely with Kafka developers, solution architects, and network teams to ensure accurate, high-performance flow data capture and export. This role is critical to ensuring the scalability, observability, and compliance of the network traffic record infrastructure.

Key Responsibilities:
Design and document the end-to-end architecture for network traffic record systems, including flow ingestion, processing, storage, and retrieval.
Deploy and configure nProbe Cento on telecom-grade network interfaces.
Tune probe performance using PF_RING ZC drivers for high-speed traffic capture.
Configure IPFIX/NetFlow export and integrate with Apache Kafka for real-time data streaming.
Set up DPI rules to identify application-level traffic (e.g., popular messaging and social media applications).
Align the flow record schema with the Detail Record specification.
Lead the integration of nProbe Cento, Kafka, Apache Spark, and Cloudera CDP components into a unified data pipeline.
Collaborate with Kafka and API teams to ensure compatibility of data formats and ingestion pipelines.
Define interface specifications, deployment topologies, and data schemas for flow records and detail records.
Monitor probe health, performance, and packet loss; implement logging and alerting mechanisms.
Collaborate with security teams to implement data encryption, access control, and compliance with regulatory standards.
Guide development and operations teams through SIT/UAT, performance tuning, and production rollout.
Provide documentation, training, and handover materials for long-term operational support.

Required Skills & Qualifications:
Proven hands-on experience with nProbe Cento in production environments.
Strong understanding of IPFIX, NetFlow, sFlow, and flow-based monitoring principles.
Experience with Cloudera SDX, Ranger, Atlas, and KMS for data governance and security.
Familiarity with HashiCorp Vault for secrets management.
Strong understanding of network packet brokers (e.g., Gigamon, Ixia) and traffic aggregation strategies.
Proven ability to design high-throughput, fault-tolerant, and cloud-native architectures.
Experience with Kafka integration, including topic configuration and message formatting.
Familiarity with DPI technologies and application traffic classification.
Proficiency in Linux system administration, shell scripting, and network interface tuning.
Knowledge of telecom network interfaces and traffic tapping strategies.
Experience with PF_RING, ntopng, and related ntop tools (preferred).
Ability to work independently and collaboratively with cross-functional technical teams.
Excellent documentation and communication skills.
Certifications in Cloudera, Kafka, or cloud platforms (e.g., AWS Architect, GCP Data Engineer) will be advantageous.

A little about us:
Innova Solutions is a diverse and award-winning global technology services partner.
We provide our clients with strategic technology, talent, and business transformation solutions, enabling them to be leaders in their field. Founded in 1998 and headquartered in Atlanta (Duluth), Georgia, Innova Solutions:
Employs over 50,000 professionals worldwide, with annual revenue approaching $3.0B.
Delivers strategic technology and business transformation solutions globally.
Operates through global delivery centers across North America, Asia, and Europe.
Provides services for data center migration and workload development for cloud service providers.

Awardee of prestigious recognitions, including:
Women's Choice Awards – Best Companies to Work for Women & Millennials, 2024
Forbes – America's Best Temporary Staffing and Best Professional Recruiting Firms, 2023
Globee Awards – American Best in Business, Healthcare Vulnerability Technology Solutions, 2023
Global Health & Pharma – Best Full Service Workforce Lifecycle Management Enterprise, 2023
Stevie International Business Awards – received 3 SBU Leadership in Business Awards; Denials Remediation Healthcare Technology Solutions, 2023
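To ground the probe-to-Kafka integration this role centres on, here is a minimal sketch of the consuming side: reading exported flow records from a Kafka topic for downstream enrichment. It assumes the confluent-kafka Python client, a hypothetical broker/topic, and a JSON export format; the actual field names depend on the IPFIX/NetFlow template configured in nProbe Cento.

```python
# Minimal Kafka consumer for flow records (confluent-kafka client).
# Broker address, topic name, and payload fields are assumptions.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker1:9092",   # hypothetical broker
    "group.id": "flow-record-enricher",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["flow-records"])        # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        flow = json.loads(msg.value())
        # Field names depend on the configured export template;
        # these source/destination keys are illustrative only.
        print(flow.get("IPV4_SRC_ADDR"), "->", flow.get("IPV4_DST_ADDR"))
finally:
    consumer.close()
```

In a real deployment, the consumer group id is what lets several enricher instances share partitions of the flow topic, which matters at the telecom-grade volumes this role describes.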
Posted 1 week ago
1.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description

About Oracle Analytics & Big Data Service:
Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights.

Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope encompasses not just tight integration with OCI's native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching and upgrades, and maintaining high availability of the service in the face of random failures and planned downtime in the underlying infrastructure (e.g., patching Linux kernels to address a security vulnerability). Developing systems for monitoring the service's runtime characteristics, gathering telemetry, and acting on that telemetry data is also part of the charter.

We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space – we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.

Minimum Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
US passport holder; this is required by the position to access US Gov regions.
Expertise in coding in Java and Python, with an emphasis on tuning/optimization.
Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
Experience with open-source software in the Big Data ecosystem.
Experience at an organization with an operational/DevOps culture.
Solid understanding of networking, storage, and security components related to cloud infrastructure.
Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.

Preferred Qualifications:
Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies.
Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
In-depth understanding of Java and JVM mechanics.
Good problem-solving skills and the ability to work in a fast-paced, agile environment.

Responsibilities

Key Responsibilities:
Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service.
Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings.
Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
Become an active member of the Apache open source community when working on open source components.
Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.

Qualifications
Career Level – IC2

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector – and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
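As an illustration of the Hadoop/Spark skill set this posting asks for, here is a minimal PySpark batch job of the kind a Hadoop-based data lake service typically runs: read raw events from HDFS, aggregate, and write curated Parquet back. All paths and column names are hypothetical placeholders, not Oracle Big Data Service internals.

```python
# Minimal PySpark batch job over an HDFS-backed data lake.
# Every path and column name below is a hypothetical placeholder.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

# Read one day of raw JSON events from the lake (hypothetical layout)
events = spark.read.json("hdfs:///datalake/raw/events/2024-01-01/")

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_time"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Partitioned Parquet keeps downstream scans cheap
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "hdfs:///datalake/curated/daily_event_counts/"
)

spark.stop()
```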
Posted 1 week ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Responsibility

Data Handling and Processing:
• Proficient in SQL Server and query optimization.
• Expertise in application data design and process management.
• Extensive knowledge of data modelling.
• Hands-on experience with Azure Data Factory, Azure Synapse Analytics, and Microsoft Fabric.
• Experience working with Azure Databricks.
• Expertise in data warehouse development, including experience with SSIS (SQL Server Integration Services) and SSAS (SQL Server Analysis Services).
• Proficiency in ETL processes (data extraction, transformation, and loading), including data cleaning and normalization.
• Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) for large-scale data processing.
• Understanding of data governance, compliance, and security measures within Azure environments.

Data Analysis and Visualization:
• Experience in data analysis, statistical modelling, and machine learning techniques.
• Proficiency in analytical tools like Python and R, and libraries such as Pandas and NumPy, for data analysis and modelling.
• Strong expertise in Power BI for data visualization, data modelling, and DAX queries, with knowledge of best practices.
• Experience in implementing Row-Level Security in Power BI.
• Ability to work with medium-complexity data models and quickly understand application data design and processes.
• Familiarity with industry best practices for Power BI and experience in performance optimization of existing implementations.
• Understanding of machine learning algorithms, including supervised, unsupervised, and deep learning techniques.

Non-Technical Skills:
• Ability to lead a team of 4-5 developers and take ownership of deliverables.
• Demonstrates a commitment to continuous learning, particularly with new technologies.
• Strong communication skills in English, both written and verbal.
• Able to effectively interact with customers during project implementation.
• Capable of explaining complex technical concepts to non-technical stakeholders.

Data Management: SQL, Azure Synapse Analytics, Azure Analysis Services and Data Marts, Microsoft Fabric
ETL Tools: Azure Data Factory, Azure Databricks, Python, SSIS
Data Visualization: Power BI, DAX
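To illustrate the ETL cleaning-and-normalization step this posting calls out, a minimal pandas sketch is shown below. The input file, column names, and normalization choice are hypothetical assumptions, not part of the role's actual pipeline.

```python
# Minimal pandas data-cleaning sketch: deduplicate, normalize text and dates,
# scale a numeric column, and write a warehouse-friendly Parquet file.
# All file and column names are hypothetical placeholders.
import pandas as pd

raw = pd.read_csv("customer_orders.csv")  # hypothetical extract

cleaned = (
    raw
    .drop_duplicates(subset=["order_id"])
    .assign(
        # Normalize inconsistent casing and stray whitespace
        customer_name=lambda df: df["customer_name"].str.strip().str.title(),
        # Coerce malformed dates to NaT instead of failing the whole load
        order_date=lambda df: pd.to_datetime(df["order_date"], errors="coerce"),
    )
    .dropna(subset=["order_date"])
)

# Min-max scale the amount column for downstream modelling
amount = cleaned["order_amount"]
cleaned["order_amount_norm"] = (amount - amount.min()) / (amount.max() - amount.min())

cleaned.to_parquet("customer_orders_clean.parquet", index=False)
```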
Posted 1 week ago
0 years
0 - 1 Lacs
Bengaluru, Karnataka, India
On-site
Wirality is a modern-day advertising agency, infused with the DNA of a digital-first environment and an entrepreneurial spirit. Our fundamental belief is that there are brands and consumers, and then there is the internet that connects the two like a bridge. We operate on this bridge, closing the gap between the two by creating relevant cultural conversations. We achieve this through our philosophy of ART X MATH, an integrated approach between digital creative & media, helping us deliver a higher ROI. Being an independent agency affords us the freedom to be bold and stray from convention.

This Is Where You Come In
We are looking for someone who:
Can use their creative craft, understand audience sentiments, and grasp today's culture to solve creative problems for a variety of brands on social and digital platforms.
Understands the difference between brand building and tactical execution.
Enjoys having the most challenging role in the creative department, as duties under this role are split between doing the work and managing the team.
Will spend roughly 50% of the time solving briefs and 50% of the time managing projects.
Understands the latest digital platforms and how their algorithms work.
As a creative lead, understands the value of a strong content strategy and visual guideline, and makes sure the work produced is grounded in insights plus creativity and is effectively communicated.
Is comfortable with video production and can work with the video team.
Knows how to develop ideas that are responsible in terms of timeline and budget.
Collaborates effectively with members of the team to get the best product possible (though we ensure work never piles up).
As a leader of a creative unit, understands the need to maintain the standard and be an example to the team. A positive attitude is more important than your creative skill or the work you produce.
Has experience working with Paid Social.

Core Qualifications Include:
Exceptional writing, video, or design skills
Ability to conceptualize
Comfortable with client interaction
Natural leadership tendencies
Experience working with all formats of social media and digital content
Proven social media understanding
A curious researcher
Exceptional ability to plan work and manage teams

Other Qualifications Include:
Loves TV and cinema
Good collaborator
Great with feedback and revisions
Consistency in work
Time management
A good sense of humour and wit
Be proactive, be a leader.

Other Requirements:
Ability to commute to work
Get us all tea (just kidding, we drink coffee)

We pride ourselves on being a human-first company and want to make this a home for everyone who works with us. The platinum rules for working here are:
Solution over problem
Be collaborative
Honey > Vinegar

Our Hiring Process
Think of this as a mini reality show – minus the drama, plus a lot more Wi-Fi.
Resume Shortlisting: We scan your resume like a hawk (with glasses), looking for experience, spark, and signs of caffeine addiction.
Screening Call: Our HR team will give you a ring to chat about your experience, vibe-check your energy, and confirm you're not a cat using a keyboard.
Initial Interview: You'll meet our hiring panel. Expect deep dives into your work, a few "what would you do if…" questions, and some laughs.
Assignment Round: Time to show us you walk the talk. You'll get a small task to prove your chops. Think of it as your creative entrance exam.
Final Interview: A chat with our founder – no pressure. Just bring your A-game and be real.
(Bonus points if you make them smile.)
Offer & Negotiation: If we're all feeling the love, we'll talk numbers, benefits, and everything else that makes this official.

Note: Due to overwhelming responses in the past, only shortlisted candidates will be responded to.
Note: This is an unpaid internship.

Skills: conceptualization, client interaction, design, creative craft, audience sentiment analysis, writing, digital content, tactical execution, visual guidelines, research, content strategy, team management, social media, brand building, time management, writing briefs, video production
Posted 1 week ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Senior Software Engineer – Backend (Python)
📍 Location: Hyderabad (Hybrid)
🕒 Experience: 5 – 12 years

About the Role:
We are looking for a Senior Software Engineer – Backend with strong expertise in Python and modern big data technologies. This role involves building scalable backend solutions for a leading healthcare product-based company.

Key Skills:
Programming: Python, Spark-Scala, PySpark (PySpark API)
Big Data: Hadoop, Databricks
Data Engineering: SQL, Kafka
Strong problem-solving skills and experience in backend architecture

Why Join?
Hybrid work model in Hyderabad
Opportunity to work on innovative healthcare products
Collaborative environment with a modern tech stack

Keywords for Search: Python, PySpark, Spark, Spark-Scala, Hadoop, Databricks, Kafka, SQL, Backend Development, Big Data Engineering, Healthcare Technology
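Since the stack above pairs PySpark with Kafka, here is a minimal Structured Streaming sketch of that combination: consuming JSON events from a Kafka topic and landing them as Parquet. The topic, broker, schema, and paths are hypothetical (the healthcare flavour of the column names is illustrative only), and the job assumes the spark-sql-kafka connector is on the Spark classpath.

```python
# Minimal PySpark Structured Streaming job reading from Kafka.
# Requires the spark-sql-kafka-0-10 connector package; all names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("claims-stream").getOrCreate()

# Hypothetical event schema for the JSON payload
schema = StructType([
    StructField("claim_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "claim-events")                # hypothetical topic
    .load()
    # Kafka delivers raw bytes; parse the JSON payload into typed columns
    .select(F.from_json(F.col("value").cast("string"), schema).alias("event"))
    .select("event.*")
)

query = (
    stream.writeStream.format("parquet")
    .option("path", "/data/claims/")               # hypothetical sink
    .option("checkpointLocation", "/chk/claims/")  # enables exactly-once sink writes
    .start()
)
query.awaitTermination()
```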
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
About Us
We are a global leader in food & beverage ingredients. Pioneers at heart, we operate at the forefront of consumer trends to provide food & beverage manufacturers with products and ingredients that will delight their consumers. Making a positive impact on people and planet is all part of the delight. With a deep-rooted presence in the countries where our ingredients are grown, we are closer to farmers, enabling better quality and more reliable, traceable, and transparent supply. Supplying products and ingredients at scale is just the start. We add value through our unique, complementary portfolio of natural, delicious, and nutritious products. With our fresh thinking, we help our customers unleash the sensory and functional attributes of cocoa, coffee, dairy, nuts, and spices so they can create naturally good food & beverage products that meet consumer expectations. And whoever we're with, whatever we're doing, we always make it real.

Introduction
At ofi, we are at the forefront of harnessing cutting-edge technology to revolutionize our operations. We aim to leverage machine learning and artificial intelligence to drive transformative business outcomes and create value for our clients. We are committed to a culture of innovation, diversity, and continuous improvement, where every team member can contribute and thrive. As an ML Engineer, you will be crucial in developing advanced algorithms and models to tackle complex problems. Your expertise will drive the deployment and upkeep of intelligent systems that enhance our products and services. You will work within a collaborative environment, leveraging data and machine learning to influence business strategies and improve operational efficiency.

Key Deliverables
Deliver end-to-end ML solutions: architect and implement state-of-the-art models – classification, regression, clustering, reinforcement learning – precisely tuned to solve high-value business problems.
Engineer data & experimentation pipelines at scale: build reliable, self-service pipelines for ingesting, cleaning, transforming, and aggregating data, and orchestrate rigorous offline/online experiments (cross-validation, A/B tests) to benchmark accuracy, latency, and resource cost.
Embed ML seamlessly into products: partner with data scientists, backend/frontend engineers, and designers to wire models into production services and user experiences, ensuring low-friction integration and measurable product impact.
Operate, monitor, and evolve models in production: own the DevOps stack – automated CI/CD, containerization, and cloud deployment – and run real-time monitoring to detect drift, performance degradation, and anomalies, triggering retraining or rollback as needed.
Uphold engineering excellence & knowledge sharing: enforce rigorous code quality, version control, testing, and documentation; lead code reviews and mentoring sessions that raise the team's ML craftsmanship.
Safeguard data privacy, security, and compliance: design models and pipelines that meet regulatory requirements, apply robust access controls and encryption, and audit usage to ensure ethical and secure handling of sensitive data.

Qualification & Skills
Formal grounding in computing & AI: Bachelor's/Master's in Computer Science, Data Science, or a related quantitative field.
Proven production experience: 4+ years shipping, deploying, and maintaining machine-learning models at scale, with a track record of solving complex, real-world problems.
End-to-end technical toolkit: Python (Pandas, NumPy), ML frameworks (TensorFlow, PyTorch, scikit-learn), databases (SQL & NoSQL), and big-data stacks (Spark, Hadoop).
MLOps & cloud deployment mastery: containerization (Docker, Kubernetes), CI/CD pipelines, and monitoring workflows that keep models reliable and reproducible in production.
Deep applied-ML expertise: supervised and unsupervised learning, NLP, computer vision, and time-series analysis, plus strong model-evaluation and feature-engineering skills.
Collaboration & communication strength: clear communicator and effective team player who can translate business goals into technical solutions and articulate results to diverse stakeholders.

ofi is an equal opportunity/affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, nationality, disability, protected veteran status, sexual orientation, gender identity, gender expression, genetic information, or any other characteristic protected by law. Applicants are requested to complete all required steps in the application process, including providing a resume/CV, in order to be considered for open roles.
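One of the deliverables above is production monitoring that detects drift and triggers retraining. A minimal sketch of one common drift check, assuming SciPy and synthetic data: a two-sample Kolmogorov-Smirnov test comparing a feature's training distribution against recent production traffic. The threshold and data are illustrative assumptions, not a prescribed ofi workflow.

```python
# Two-sample KS test as a simple feature-drift detector.
# Data is synthetic; the 0.01 p-value threshold is an illustrative choice.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training snapshot
live_feature = rng.normal(loc=0.3, scale=1.0, size=5_000)   # shifted production data

statistic, p_value = ks_2samp(train_feature, live_feature)

# A tiny p-value means the two distributions differ: flag for retraining review
if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}); trigger retraining check")
else:
    print(f"No significant drift (KS={statistic:.3f}, p={p_value:.2e})")
```

In practice a check like this runs per feature on a schedule, and a drift flag feeds the retraining-or-rollback decision the posting describes rather than retraining automatically.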
Posted 1 week ago
7.0 years
0 Lacs
India
Remote
Description
Demand Generation Manager
India, Remote

EGNYTE YOUR CAREER. SPARK YOUR PASSION.

Role
Egnyte is a place where we spark opportunities for amazing people. We believe that every role has a great impact, and every Egnyter should be respected. When joining Egnyte, you're not just landing a new career; you become part of a team of Egnyters who are doers, thinkers, and collaborators who embrace and live by our values:
Invested Relationships
Fiscal Prudence
Candid Conversations

About Egnyte
Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com.

Our GTM Strategy Team is the driving force behind the seamless functioning of go-to-market initiatives within the organization. Tasked with optimizing processes and leveraging technology, this team ensures the efficient delivery of GTM programs. By analyzing data, implementing effective tools, and collaborating across departments, the GTM Strategy team contributes to the enhancement of sales experiences and the overall success of the organization. Their strategic planning and cross-functional coordination play a critical role not only in retaining customers but also in fostering growth and ensuring the continual delivery of value to customers through products or services.

What You'll Do
Create materials to communicate strategic plans
Analyze and manage data-driven initiatives to drive revenue growth
Monitor and report on key performance metrics
Identify and recommend new revenue strategies
Research market trends and the competitive landscape to create recommendations for strategic pivots
Partner with finance, marketing, and sales leaders to help create annual revenue plans

Your Qualifications
WHO YOU ARE: knowledgeable, analytical, and intellectual.
7 years' experience at a top-tier consulting firm (e.g., McKinsey, Bain, BCG, Deloitte)
You are a problem-solver who can take the initiative to develop and implement innovative solutions
You've got strong quantitative skills and are comfortable analyzing data sets, spotting trends, and synthesizing relevant observations
You like thinking outside the box to come up with innovative points of view
Basic knowledge of Tableau, Salesforce, and SQL a plus

Benefits
Competitive salaries
Medical insurance and healthcare benefits for you and your family
Fully paid premiums for life insurance
Flexible hours and PTO
Mental wellness platform subscription
Gym reimbursement
Childcare reimbursement
Group term life insurance

Commitment to Diversity, Equity, and Inclusion
At Egnyte, we celebrate our differences and thrive on our diversity for our employees, our products, our customers, our investors, and our communities. Egnyters are encouraged to bring their whole selves to work and to appreciate the many differences that collectively make Egnyte a higher-performing company and a great place to be.
Posted 1 week ago
0 years
0 Lacs
India
Remote
Established in 2004, OLIVER is the world's first and only specialist in designing, building, and running bespoke in-house agencies and marketing ecosystems for brands. We partner with over 300 clients in 40+ countries and counting. Our unique model drives creativity and efficiency, allowing us to deliver tailored solutions that resonate deeply with audiences.

As a part of The Brandtech Group, we're at the forefront of leveraging cutting-edge AI technology to revolutionise how we create and deliver work. Our AI solutions enhance efficiency, spark creativity, and drive insightful decision-making, empowering our teams to produce innovative and impactful results.

Job Title: Digital Designer (Motion)
Role: Freelancer
Duration: 5 months
Location: India (Remote)

About the role:
OLIVER is looking to recruit a Digital Designer to work on-site with one of our key clients. The ideal candidate will have a strong integrated design background, with deep knowledge of digital-first advertising and creative. Proficiency in After Effects or basic animation and video editing is a must. Reporting into the Design Team Lead for creative work, the candidate will partner with the Lead in producing digital concepts and design to the client's brief and exacting standards while positively influencing clients with their creative input, in addition to undertaking and pitching new creative concepts. The candidate will work on an account covering all things digital design, including social media, e-commerce, creative ideation, artworking, and offline design collateral.

What you will be doing:
Responsible for brand consistency across all outputs; experience in CRM, digital, and offline is desired.
Producing short-form, mobile-first, innovative digital content for the client's websites, digital applications, and social media channels.
Working independently from creative concept to execution.
Being accountable for the work of the creative team, ensuring all work follows brand guidelines and each platform's best practices.
Together with the Design Team Lead, working actively with all internal and external stakeholders to ensure the delivery of the highest level of client service – from brief, creative, design, and production.
Working closely with the Design Team Lead to create strong concepts from the initial briefing.
Assisting the Design Team Lead in pitching creative solutions in response to marketing strategies.
Managing the preparation of all finished artwork files so that they comply with the correct output specifications.
Ensuring all design work adheres to the best practices of digital and social trends and requirements.
Resourcing and scheduling of your own work.
Managing project deliverables and key deadlines.
Supporting with BAU design work.
Quality control.
Client relations: acting as an alternative point of contact on-site, supporting the Design Team Lead and Account Manager in the day-to-day relationships with key stakeholders.
Creative and quality oversight for work produced locally.
Work with key clients to deliver the following types of projects:
Social media and e-commerce specific work, e.g., Facebook, Lazada, and YouTube
On-site design updates (mostly posters, icons, logos, and presentation slides)
Support on visual identity and tone of voice for campaign materials, including POS and OOH
Merchandise design and production
Support on one-off projects (e.g., brand day, anniversaries, social activities)
Constant and proactive branded-asset optimisation throughout the company

What you need to be great in this role:
Self-motivation, working with little supervision and communicating clearly with a line manager about your own development needs.
A multimedia arts degree or a related field.
Good client engagement skills, with the ability to proactively organize and lead discussions with clients and build strong and effective working relationships with brand managers.
The ability to manage and filter workflow and prioritise workloads to maximise productivity within a given timeline.
Ability to take and challenge a client's brief for clarity.
Some exposure to and knowledge of working directly with clients without account management support.
Experience providing clear and accurate management information.
Creative ability with strong Adobe CS skills (InDesign, Illustrator, and Photoshop).
Good After Effects or basic animation and video editing skills.
A good multitasker.
Guardian of the client's brand guidelines, constantly challenging and developing them.
Working knowledge of digital design and its requirements is a benefit.
Passion for and curiosity about AI and new technologies.
Understanding and knowledge of AI tools is beneficial, but the ability to learn and digest the benefits and features of AI tools is critical.

Req ID: 13896

Our values shape everything we do:
Be Ambitious to succeed
Be Imaginative to push the boundaries of what's possible
Be Inspirational to do groundbreaking work
Be always learning and listening to understand
Be Results-focused to exceed expectations
Be actively pro-inclusive and anti-racist across our community, clients and creations

OLIVER, a part of the Brandtech Group, is an equal opportunity employer committed to creating an inclusive working environment where all employees are encouraged to reach their full potential, and individual differences are valued and respected. All applicants shall be considered for employment without regard to race, ethnicity, religion, gender, sexual orientation, gender identity, age, neurodivergence, disability status, or any other characteristic protected by local laws.

OLIVER has set ambitious environmental goals around sustainability, with science-based emissions reduction targets. Collectively, we work towards our mission, embedding sustainability into every department and through every stage of the project lifecycle.
Posted 1 week ago