5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
Essential Job Functions:
- Participate in data engineering tasks, including data processing and integration activities.
- Assist in the development and maintenance of data pipelines.
- Collaborate with team members to collect, process, and store data.
- Contribute to data quality assurance efforts and adherence to data standards.
- Use data engineering tools and techniques to analyze and generate insights from data.
- Collaborate with data engineers and other analysts on data-related projects.
- Seek out opportunities to enhance data engineering skills and domain knowledge.
- Stay informed about data engineering trends and best practices.

Basic Qualifications:
- Bachelor's degree in a relevant field or an equivalent combination of education and experience
- Typically 5+ years of relevant industry experience, with a minimum of 2 years in a similar role
- Proven experience in data engineering
- Proficiency with data engineering tools and technologies
- A continuous learner who stays abreast of industry knowledge and technology

Other Qualifications:
- Advanced degree in a relevant field a plus
- Relevant certifications, such as Oracle Certified Professional or MySQL Database Administrator, a plus
Posted 1 week ago
4.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Job Description: We are seeking a talented and experienced GenAI MLOps Engineer to join our dynamic team. The ideal candidate will have a strong background in machine learning operations, particularly with unstructured-data model building on cloud platforms such as AWS, GCP, or Azure. A solid understanding of Kubernetes (K8s) and proficiency in Python are also highly desirable.

Key Responsibilities:
- Design, implement, and maintain MLOps pipelines for deploying and managing machine learning models in production.
- Develop and optimize workflows for handling unstructured data, including text, images, and other non-tabular data formats.
- Collaborate with data scientists, compute/cloud engineers, and other stakeholders to ensure seamless integration of machine learning models into production systems.
- Utilize cloud platforms (AWS, GCP, Azure) to build scalable and efficient machine learning solutions.
- Implement best practices for version control, continuous integration, and continuous deployment (CI/CD) of machine learning models.
- Monitor and troubleshoot production machine learning systems to ensure high availability and performance.
- Stay up to date with the latest advancements in MLOps, machine learning, and cloud technologies.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 4-8 years of experience in MLOps, machine learning, or a related field.
- Strong proficiency in Python programming, with experience in libraries such as TensorFlow, PyTorch, and Scikit-learn.
- Experience with GenAI agent-building frameworks such as AWS Bedrock, Azure Cognitive Services, Google Vertex AI, LangChain, and similar technologies.
- Proficiency in developing and deploying GenAI applications, including chatbots, retrieval-augmented generation (RAG), and other related technologies.
- Hands-on experience with cloud platforms (AWS, GCP, Azure) for deploying and managing machine learning models.
- Experience with unstructured data processing and model building.
- Knowledge of containerization and orchestration tools such as Docker and Kubernetes (K8s).
- Familiarity with CI/CD tools and practices.
- Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.
- Strong communication skills, both written and verbal.

Preferred Qualifications:
- Experience with natural language processing (NLP) and computer vision techniques.
- Proficiency in Databricks for managing the entire machine learning lifecycle.
- Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
- Familiarity with monitoring and logging tools such as Prometheus, Datadog, or Grafana.
Posted 1 week ago
2.0 - 4.0 years
4 - 6 Lacs
Bengaluru
Work from Office
Role: ML Engineer II
Location: Bengaluru

What you'll do
We're MiQ, a global programmatic media partner for marketers and agencies. Our people are at the heart of everything we do, so you will be too. No matter the role or the location, we're all united in the vision to lead the programmatic industry and make it better.

Role Description:
As a Machine Learning Engineer in our data science department, you'll be a cross-functional expert who bridges Data Science and Software Engineering skill sets to build scalable, production-ready AI solutions. You'll build reusable feature tables and ETL pipelines, optimize model training and inferencing for performance, and build monitoring systems that enable rigorous tracking of ML-powered solutions in production.

What are your responsibilities
- Improve the time to market for key business solutions by reducing model training and deployment time.
- Develop reusable components that accelerate rapid prototyping and deployment of solutions.
- Develop monitoring systems that track the performance and reliability of ML-driven solutions.
- Help socialize MLOps capabilities across the organization by establishing best practices and standards.

Who are your stakeholders
The ML Engineer's internal customers and stakeholders include:
- Data Science, Product, and Engineering stakeholders: you will work closely with them to build reliable, scalable, and optimized ML pipelines, ensuring seamless deployment, performance monitoring of ML solutions, and maintainability standards.
- Programmatic traders, analysts, sales teams, and account managers: ML solutions in production will be consumed as insights, recommendations, and campaign optimisations by personas that plan and manage programmatic campaigns.

What you'll bring
- 2-4 years of hands-on experience deploying and monitoring machine learning models in production.
- Strong proficiency in Python and familiarity with distributed data processing frameworks like PySpark.
- Excellent grasp of the basics of machine learning, statistics, and probability.
- Experience building scalable ETL pipelines for efficient storage, processing, and retrieval of features to support ML model development.
- Experience building and deploying end-to-end model training, inferencing, and retraining pipelines in production.
- Experience with data and model monitoring to ensure consistency and reliability of solutions in production.
- Familiarity with standard best practices such as version control and CI/CD for code management, unit testing, and quality code standards.
- Experience working in cross-functional teams of data scientists, data engineers, and product managers to deliver quality solutions that meet business goals.
- Passion for self-driven learning and an eagerness to keep up with continuously evolving MLOps/LLMOps technologies and best practices.
- [Bonus] Hands-on experience with distributed data processing frameworks like Ray.
- [Bonus] Hands-on experience with designing and maintaining feature stores.

We've highlighted some key skills, experience, and requirements for this role. But please don't worry if you don't meet every single one. Our talent team strives to find the best people. They might see something in your background that's a fit for this role, or another opportunity at MiQ. If you have a passion for the role, please still apply.

What's in it for you
Our Center of Excellence is the very heart of MiQ, and it's where the magic happens. It means everything you do and everything you create will have a huge impact across our entire global business. MiQ is incredibly proud to foster a welcoming culture. We do everything possible to make sure everyone feels valued for what they bring. With global teams committed to diversity, equity, and inclusion, we're always moving towards becoming an even better place to work.

Values
Our values are so much more than statements. They unite MiQers in every corner of the world. They shape the way we work and the decisions we make. And they inspire us to stay true to ourselves and to aim for better. Our values are there to be embraced by everyone, so that we naturally live and breathe them. Just like inclusivity, our values flow through everything we do, no matter how big or small.
- We do what we love - Passion
- We figure it out - Determination
- We anticipate the unexpected - Agility
- We always unite - Unity
- We dare to be unconventional - Courage

Benefits
Every region and office has specific perks and benefits, but every person joining MiQ can expect:
- A hybrid work environment
- New hire orientation with job-specific onboarding and training
- Internal and global mobility opportunities
- Competitive healthcare benefits
- Bonus and performance incentives
- Generous annual PTO and paid parental leave, with two additional paid days to acknowledge holidays, cultural events, or inclusion initiatives
- Employee resource groups designed to connect people across all MiQ regions, drive action, and support our communities

Apply today! Equal Opportunity Employer
Posted 1 week ago
2.0 - 7.0 years
4 - 9 Lacs
Bengaluru
Work from Office
Building off our Cloud momentum, Oracle has formed a new organization: Health Data Intelligence. This team will focus on product development and product strategy for Oracle Health, while building out a complete platform supporting modernized, automated healthcare. This is a net-new line of business, constructed with an entrepreneurial spirit that promotes an energetic and creative environment. We are unencumbered and will need your contribution to make it a world-class engineering center with a focus on excellence. Oracle Health Data Analytics has a rare opportunity to play a critical role in how Oracle Health products impact and disrupt the healthcare industry by transforming how healthcare and technology intersect.

Career Level - IC2

As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. You will define specifications for significant new projects and specify, design, and develop software according to those specifications. You will perform professional software development tasks associated with developing, designing, and debugging software applications or operating systems.
- Design and build distributed, scalable, and fault-tolerant software systems.
- Build cloud services on top of the modern OCI infrastructure.
- Participate in the entire software lifecycle: design, development, quality assurance, and production.
- Invest in the best engineering and operational practices upfront to ensure our software quality bar is high.
- Optimize data processing pipelines for orders-of-magnitude higher throughput and faster latencies.
- Leverage the extensive internal tooling at OCI to develop, build, deploy, and troubleshoot software.

Qualifications
- 2+ years of experience in the software industry working on design, development, and delivery of highly scalable products and services.
- Understanding of the entire product development lifecycle, including understanding and refining technical specifications, the HLD and LLD of world-class products and services, refining the architecture by providing feedback and suggestions, developing and reviewing code, driving DevOps, and managing releases and operations.
- Strong knowledge of Java or JVM-based languages.
- Experience with multi-threading and parallel processing.
- Strong knowledge of big data technologies like Spark, Hadoop MapReduce, Crunch, etc.
- Past experience building scalable, performant, and secure services/modules.
- Understanding of microservices architecture and API design.
- Experience with container platforms.
- Good understanding of testing methodologies.
- Experience with CI/CD technologies.
- Experience with observability tools like Splunk, New Relic, etc.
Posted 1 week ago
2.0 - 5.0 years
4 - 7 Lacs
Coimbatore
Work from Office
About Responsive
Responsive (formerly RFPIO) is the global leader in strategic response management software, transforming how organizations share and exchange critical information. The AI-powered Responsive Platform is purpose-built to manage responses at scale, empowering companies across the world to accelerate growth, mitigate risk, and improve employee experiences. Nearly 2,000 customers have standardized on Responsive to respond to RFPs, RFIs, DDQs, ESGs, security questionnaires, ad hoc information requests, and more. Responsive is headquartered in Portland, OR, with additional offices in Kansas City, MO and Coimbatore, India. Learn more at responsive.io.

About the Role
Responsive is looking for a product-minded Software Engineer with strong technical skills and a passion for building scalable solutions. This is an opportunity to work in a fast-paced, innovative environment and contribute to the growth of a top-tier SaaS company.

What You'll Be Doing
- Distributed Systems Development: Design, develop, and maintain the Responsive application, ensuring high performance, scalability, and reliability.
- Performance Tuning: Monitor and optimize performance, addressing bottlenecks and ensuring low-latency query responses.
- Collaboration: Work closely with cross-functional and geographically distributed teams, including product managers, frontend engineers, and UX designers, to deliver seamless and intuitive experiences.
- Continuous Improvement: Stay updated with the latest trends and advancements in technologies, conducting research and experimentation to drive innovation.

What We're Looking For
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: 2 to 5 years of experience in software design, development, and algorithm-related solutions using Java and related technologies.
- Skills, Qualifications & Ability:
- Strong proficiency in Java programming, Java design patterns, and server-side Java development.
- Demonstrated versatility in multiple front-end and back-end technologies such as Spring, React, MongoDB, etc.
- Experience working with cloud platforms such as GCP, Azure, or AWS.
- Knowledge of cloud-native services for AI/ML, data storage, and processing.
- Expertise in search and retrieval technologies, including search platforms like ElasticSearch, Apache Solr, or similar; experience with AI-driven solutions, Natural Language Processing (NLP), semantic search, and text processing techniques is a plus.
- Proficiency in Test-Driven Development (TDD).
- Scrum and JIRA experience is a plus.
- Experience working in a fast-paced, dynamic environment, preferably in a SaaS or technology-driven company.

Why Join Us
- Impact-Driven Work: Build innovative solutions that redefine strategic response management.
- Collaborative Environment: Work with a passionate team of technologists, designers, and product leaders.
- Career Growth: Be part of a company that values learning and professional development.
- Competitive Benefits: We offer comprehensive compensation and benefits to support our employees.
- Trusted by Industry Leaders: Be part of a product that is trusted by world-leading organizations.
- Cutting-Edge Technology: Work on AI-driven solutions, cloud-native architectures, and large-scale data processing.
- Diverse and Inclusive Workplace: Collaborate with a global team that values different perspectives and ideas.
Posted 1 week ago
9.0 - 14.0 years
35 - 40 Lacs
Bengaluru
Work from Office
About this opportunity:
Join Ericsson as a Data Scientist. This position plays a crucial role in the development of Python-based solutions, their deployment within a Kubernetes-based environment, and ensuring smooth data flow for our machine learning and data science initiatives. The ideal candidate will possess a strong foundation in Python programming, hands-on experience with ElasticSearch, Logstash, and Kibana (ELK), a solid grasp of fundamental Spark concepts, and familiarity with visualization tools such as Grafana and Kibana. Furthermore, a background in MLOps and expertise in both machine learning model development and deployment will be highly advantageous.

What you will do:
- Python Development: Write clean, efficient, and maintainable Python code to support data engineering tasks, including collection, transformation, and integration with ML models.
- Data Pipeline Development: Design, build, and maintain robust data pipelines to gather, process, and transform data from multiple sources into formats suitable for ML and analytics, leveraging ELK, Python, and other leading technologies.
- Spark Knowledge: Apply core Spark concepts for distributed data processing where required, and optimize workflows for performance and scalability.
- ELK Integration: Implement ElasticSearch, Logstash, and Kibana for data ingestion, indexing, search, and real-time visualization. Knowledge of OpenSearch and related tooling is beneficial.
- Dashboards and Visualization: Create and manage Grafana and Kibana dashboards to deliver real-time insights into application and data performance.
- Model Deployment and Monitoring: Deploy machine learning models and implement monitoring solutions to track model performance, drift, and health.
- Data Quality and Governance: Implement data quality checks and data governance practices to ensure data accuracy, consistency, and compliance with data privacy regulations.
- MLOps (Added Advantage): Contribute to the implementation of MLOps practices, including model deployment, monitoring, and automation of machine learning workflows.
- Documentation: Maintain clear and comprehensive documentation for data engineering processes, ELK configurations, machine learning models, visualizations, and deployments.

The skills you bring:
- Core Skills: Strong Python programming skills, experience building data pipelines, and knowledge of the ELK stack (ElasticSearch, Logstash, Kibana).
- Distributed Processing: Familiarity with Spark fundamentals and when to leverage distributed processing for large datasets.
- Cloud & Containerization: Practical experience deploying applications and services on Kubernetes; familiarity with Docker and container best practices.
- Monitoring & Visualization: Hands-on experience creating dashboards and alerts with Grafana and Kibana.
- ML & MLOps: Experience collaborating on ML model development, and deploying and monitoring ML models in production; knowledge of model monitoring, drift detection, and CI/CD for ML is a plus.

Experience: 9 to 14 years
Primary country and city: India (IN) || Bangalore
Req ID: 772044
Posted 1 week ago
4.0 - 7.0 years
6 - 9 Lacs
Bengaluru
Work from Office
As an SSE in our Technology department, you require:
- Hands-on experience with Big Data technologies such as Databricks, Snowflake, EMR, Trino, Athena, StarTree, SageMaker Studio, etc., with a strong foundation in data engineering concepts.
- Proficiency in data processing and transformation using PySpark and SQL, and familiarity with at least one JVM-based language (Java/Scala/Kotlin).
- Familiarity with microservice integration in data systems; understanding of the basic principles of interoperability and service communication.
- Solid experience in data pipeline development and familiarity with orchestration frameworks (e.g., Airflow, DBT), with an ability to build scalable and reliable ETL workflows.
- Exposure to MLOps/DataOps practices, with contributions to the rollout or maintenance of production pipelines.
- Knowledge of observability frameworks and practices to support platform reliability and troubleshooting. Working knowledge of any observability tools (e.g., Prometheus, Grafana, Datadog) is highly desirable.
- Experience assisting in ETL optimization, platform issue resolution, and performance tuning in collaboration with other engineering teams.
- Good understanding of access management, including RBAC, ABAC, and PBAC, and familiarity with auditing and compliance basics.
- Practical experience with cloud infrastructure (AWS preferred), including EC2, S3, IAM, VPC basics, and Terraform or similar IaC tools.
- Understanding of CI/CD pipelines and ability to contribute to release automation, deployment strategies, and system testing.
- Interest in data governance; working exposure to cataloging tools such as Unity Catalog, Amundsen, or Apache Atlas is a plus.
- Strong problem-solving skills with a collaborative mindset and a passion for exploring AI tools, frameworks, and emerging technologies in the data space.
- Demonstrates ownership, initiative, and curiosity while contributing to research, platform improvements, and code quality standards.

Skills: AWS, Databricks, Java, Microservices, PySpark, Spring Boot, SQL
Posted 1 week ago
8.0 - 13.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Staff Software Engineer, Backend, Bengaluru | Coupang Careers

Description
We exist to wow our customers. We know we're doing the right thing when we hear our customers say, "How did we ever live without Coupang?" Born out of an obsession to make shopping, eating, and living easier than ever, we're collectively disrupting the multi-billion-dollar e-commerce industry from the ground up. We are one of the fastest-growing e-commerce companies, with an unparalleled reputation for being a dominant and reliable force in South Korean commerce. We are proud to have the best of both worlds: a startup culture with the resources of a large global public company. This fuels us to continue our growth and launch new services at the speed we have been at since our inception. We are all entrepreneurial, surrounded by opportunities to drive new initiatives and innovations. At our core, we are bold and ambitious people who like to get our hands dirty and make a hands-on impact. At Coupang, you will see yourself, your colleagues, your team, and the company grow every day. Our mission to build the future of commerce is real. We push the boundaries of what's possible to solve problems and break traditional tradeoffs. Join Coupang now to create an epic experience in this always-on, high-tech, and hyper-connected world.

Job Overview:
As a Staff Software Engineer, Backend, you will work on distributed systems, data processing pipelines, and building next-generation platforms and products. You will help the team bring industry best practices to software development and operations while improving their engineering skills to build a pioneering e-commerce experience in new global markets. Working closely with a group of engineers in multiple geographic locations, you'll solve challenging problems at scale with high reliability. Your efforts will directly impact tens of millions of users every single day!

Key Responsibilities:
- Drive the highest quality of architecture and design of data pipelines and systems.
- Draw roadmaps and a vision for the scalable and robust growth of the online serving platform.
- Collaborate with other engineering teams to make the platform open and extensible, unlocking innumerable opportunities for innovation.
- Align with stakeholders and lead engineers on mission-critical projects.
- Decompose complex problems into simple, straightforward solutions.
- Possess expert knowledge of the performance, scalability, and availability of data pipelines.
- Leverage knowledge of internal and industry best practices in design.
- Deep-dive into and handle critical system issues.

Qualifications:
- Bachelor's and/or Master's degree in computer science or equivalent.
- Minimum 8 years of experience working on software design and development in Java and Python.
- Hands-on experience with designing, building, and deploying scalable, highly available data pipelines.
- Large-system architecture design and development experience.
- Experience with cloud computing on AWS.
- Experience developing container-based testing environments.
- Experience with a Java / IntelliJ / Spring environment.
Posted 1 week ago
12.0 - 17.0 years
45 - 55 Lacs
Pune
Work from Office
Company: Trinamix Inc.
Position: GenAI Lead
Experience: 12+ Years
Location: Remote
Employment Type: Full-Time

About Us
Trinamix Inc. is a leading global Oracle implementation and technology partner, recognized for driving digital transformation through innovative solutions in Supply Chain, Finance, Manufacturing, and emerging technologies. As we expand our portfolio, we are building a strong focus on Generative AI (GenAI) to create next-gen solutions for our clients worldwide.

Role Overview
We are seeking a highly experienced GenAI Lead to spearhead the design, development, and delivery of Generative AI solutions. This role requires a mix of strong technical expertise, leadership ability, and business acumen. The ideal candidate will lead a team of data scientists and engineers, collaborate with stakeholders to identify impactful use cases, and drive the successful deployment of AI models into production.

Key Responsibilities
- Lead the end-to-end design and development of Generative AI models, applications, and frameworks.
- Collaborate with cross-functional teams and stakeholders to identify business use cases and define AI strategies.
- Manage and mentor a high-performing team of AI/ML engineers and data scientists.
- Oversee deployment of AI models in production and monitor performance, scalability, and security.
- Drive innovation and research in cutting-edge AI/ML technologies to enhance Trinamix's AI offerings.
- Define and implement best practices for AI development, testing, and deployment.
- Partner with leadership and delivery teams to ensure AI solutions align with business objectives.
- Act as a thought leader for GenAI adoption, internally and externally, through innovation showcases and client interactions.

Skills & Qualifications
- 12+ years of overall experience with strong expertise in AI/ML, deep learning, and Generative AI.
- Hands-on experience with AI/ML frameworks such as TensorFlow, PyTorch, and Keras.
- Strong programming skills in Python (R/Java is a plus).
- Knowledge of data processing/analytics tools like Pandas, NumPy, and SQL.
- Proven expertise in cloud platforms (AWS, Azure, Google Cloud) for AI/ML model deployment.
- Demonstrated ability to lead AI delivery teams and manage complex projects.
- Excellent problem-solving, critical thinking, and stakeholder management skills.
- Strong communication and interpersonal skills to work with both technical and business stakeholders.

Why Join Trinamix
- Lead AI-driven innovation in a global consulting environment.
- Collaborate with world-class clients and industry leaders.
- Opportunity to shape next-gen AI products and solutions.
- Be part of a supportive, forward-thinking, and innovative workplace culture.
Posted 1 week ago
4.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Summary
In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Responsibilities
- Design and build data pipelines and data lakes to automate ingestion of structured and unstructured data, providing fast, optimized, and robust end-to-end solutions.
- Knowledge of data lake and data warehouse concepts.
- Experience working with AWS big data technologies.
- Improve the data quality and reliability of data pipelines through monitoring, validation, and failure detection.
- Deploy and configure components to production environments.

Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark
Mandatory skill sets: AWS Data Engineer
Preferred skill sets: AWS Data Engineer
Years of experience required: 4-8 years
Education qualification: B.Tech/MBA/MCA
Degrees/Field of Study required: Bachelor Degree, Master Degree
Degrees/Field of Study preferred:

Required Skills: Data Engineering
Additional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}

Travel Requirements Available for Work Visa Sponsorship
Posted 1 week ago
4.0 - 8.0 years
6 - 10 Lacs
Kolkata
Work from Office
Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Summary
In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Responsibilities
- Design and build data pipelines and data lakes to automate ingestion of structured and unstructured data, providing fast, optimized, and robust end-to-end solutions.
- Knowledge of data lake and data warehouse concepts.
- Experience working with AWS big data technologies.
- Improve the data quality and reliability of data pipelines through monitoring, validation, and failure detection.
- Deploy and configure components to production environments.

Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark
Mandatory skill sets: AWS Data Engineer
Preferred skill sets: AWS Data Engineer
Years of experience required: 4-8 years
Education qualification: B.Tech/MBA/MCA
Degrees/Field of Study required: Bachelor Degree, Master Degree
Degrees/Field of Study preferred:

Required Skills: Data Engineering
Additional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}

Travel Requirements Available for Work Visa Sponsorship
Posted 1 week ago
8.0 - 13.0 years
30 - 35 Lacs
Pune
Work from Office
Join us to lead innovative AI solutions and elevate your career in data analytics. As a Quant Manager within the Data Analytics team, you will spearhead the design, development, and deployment of AI solutions for Chase Travel. You will lead a team to deliver high-quality data engineering and analytics solutions, focusing on natural language processing and AI impact evaluation.

Job Responsibilities
- Lead and mentor a team of data engineers and analysts, fostering continuous learning.
- Design, develop, and maintain large-scale data processing pipelines for optimal performance.
- Collaborate with cross-functional teams to translate business objectives into AI solutions.
- Serve as a data expert, providing guidance to account teams and stakeholders.
- Develop advanced dashboards and reports using SAS Visual Analytics, QlikSense, and Tableau.
- Partner with AI Product teams to drive the AI roadmap and strategic decisions.
- Identify and collect critical data points for post-release analysis of AI initiatives.

Required Qualifications, Capabilities, and Skills
- Bachelor's degree in Business Analytics, Data Science, Computer Science, or a related field.
- Strong analytical skills with the ability to uncover insights from business and data contexts.
- 8+ years of experience in data models and analytic solutions, focusing on leadership roles.
- Extensive experience with ETL/ELT processes in cloud-based data lake platforms.
- Proficiency in SQL, Python, and Alteryx for data analysis and transformation.
- Expertise in SAS Visual Analytics, QlikSense, and Tableau.
- Exceptional communication skills for effective stakeholder collaboration.
- Proven ability to manage multiple large-scale projects simultaneously.
Posted 1 week ago
3.0 - 8.0 years
5 - 10 Lacs
bengaluru
Work from Office
Not Applicable Specialism Data, Analytics & AI & Summary We are looking for a seasoned Azure Data Engineer. Responsibilities Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources. Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics. Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements. Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness. Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance. Requirements Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark. Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database. Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices. Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL). Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.
Mandatory skill sets Spark, Pyspark, Azure Preferred skill sets Spark, Pyspark, Azure Years of experience required 8+ Education qualification BE/B.Tech/MBA/MCA Education Degrees/Field of Study required Bachelor of Engineering, Master of Business Administration Degrees/Field of Study preferred Required Skills Spark Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Coaching and Feedback, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment {+ 21 more} Travel Requirements Available for Work Visa Sponsorship
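The extract-transform-load workflow this posting describes can be illustrated with a minimal sketch. This is a toy example in plain Python, not part of the job description: the field names and values are hypothetical, and a real pipeline would operate on PySpark DataFrames against Azure services rather than in-memory lists.

```python
# Illustrative toy ETL sketch: extract raw records, transform (drop bad
# rows and cast types), and load into an aggregated store. Field names
# are hypothetical; real pipelines would use PySpark on Azure Databricks.

def extract():
    # Stand-in for reading from a source such as Azure Data Lake Storage.
    return [
        {"id": 1, "region": "south", "amount": "120.5"},
        {"id": 2, "region": "north", "amount": "80.0"},
        {"id": 3, "region": "south", "amount": None},  # bad record
    ]

def transform(rows):
    # Drop records with missing amounts and cast the amount to float.
    return [
        {**r, "amount": float(r["amount"])}
        for r in rows
        if r["amount"] is not None
    ]

def load(rows):
    # Aggregate total amount per region, as a warehouse table might.
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

result = load(transform(extract()))
print(result)  # {'south': 120.5, 'north': 80.0}
```

The same three stages map directly onto Spark's `read`, DataFrame transformations, and `write` steps in a production job.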
Posted 1 week ago
2.0 - 6.0 years
4 - 8 Lacs
bengaluru
Work from Office
Software Engineer - Angular Supply Chain optimization, Service Fulfillment (e.g. warranty management, field service management, service parts management, knowledge management). With this we are winning the hearts and minds of world-leading organizations, such as JCB, Kubota, Electrolux, Toyota, Renault and Hitachi. What would you do Should be well-versed in Angular (v12+) Good command over RxJS, Observables, Redux with Angular (NgRx) or similar state management approaches. Unit testing and e2e testing experience are a must.
Experience in building large applications, preferably on the heavy data processing & analytics side Experience in JavaScript, TypeScript, HTML5, and CSS3, with a good understanding of CSS preprocessors like LESS, SASS In-depth understanding of design patterns, OOP, and Functional programming Good knowledge of at least one backend programming language (Node.js) Passionate programmer focused on UI Deep knowledge of Angular practices and commonly used modules based on extensive work experience Creating self-contained, reusable, and testable modules and components Ensuring a clear dependency chain, in regard to the app logic as well as the file system Ensuring the Atomic design pattern and BEM standards are followed Good knowledge of PWA with IndexedDB and service worker integrations is expected. Thorough understanding of the responsibilities of the platform, database, API, caching layer, proxies, and other web services used in the system Writing non-blocking code Creating custom, general-use modules and components that extend the elements and modules of the core Angular stack Experience with Angular Material, Angular CDK and Cypress is a plus. Unsure if you meet all the job requirements but passionate about the role? Apply anyway! Syncron values diversity and welcomes all Candidates, even those with non-traditional backgrounds. We believe in transferable skills and a shared passion for success! Respect. Flexibility. Growth. At Syncron, we're not just shaping the future of service lifecycle management - we're also cultivating a dynamic and innovative community of thinkers, doers and visionaries passionate about making a difference. The world is changing. Manufacturing companies are shifting from selling products to delivering services. And we are driving this transformation together with our Customers, by helping them reduce costs and manual processes. We are guiding them on their journey towards a fully connected service experience and making their brand stronger.
Posted 1 week ago
4.0 - 9.0 years
6 - 11 Lacs
pune
Work from Office
We have an exciting opportunity for you to advance your career in AI and data analytics. As a Quant Analytics Associate within the Data Analytics team, you will design, develop, and deploy AI solutions for Chase Travel. You will collaborate with various teams to create AI applications, focusing on natural language processes and information extraction, to drive analytics insights and support the AI roadmap. Job Responsibilities Organize, update, and maintain data to support engineering and analytics functions. Develop enterprise data models and large-scale data processing pipelines. Collaborate with cross-functional teams to translate business objectives into AI solutions. Act as a data expert, providing insights and recommendations to account teams. Develop dashboards and reports using SAS Visual Analytics, QlikSense, and Tableau. Assist AI Product teams by providing analytics and guidance for the AI roadmap. Identify and track data points for post-release analysis of AI initiatives. Required Qualifications, Capabilities, and Skills Bachelor's degree in Business Analytics, Data Science, Computer Science, Information Systems, or related field. Strong analytical and problem-solving skills with attention to detail. 4+ years of experience in designing and implementing data models and analytic solutions. Experience with ETL/ELT processes in cloud-based data lake platforms like AWS, Azure, or Snowflake. Proficient in SQL, Python, and Alteryx for data analysis and engineering. Experience with data models and reporting packages like SAS Visual Analytics, QlikSense, or Tableau. Effective verbal and written communication skills.
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
pune
Work from Office
We have an exciting opportunity for you to advance your career in AI and data analytics. As a Quant Analytics Senior Associate within the Data Analytics team, you will design, develop, and deploy AI solutions for Chase Travel. You will collaborate with various teams to create AI applications, focusing on natural language processes and information extraction, to drive analytics insights and support the AI roadmap. Job Responsibilities Organize, update, and maintain data to support engineering and analytics functions. Develop enterprise data models and large-scale data processing pipelines. Collaborate with cross-functional teams to translate business objectives into AI solutions. Act as a data expert, providing insights and recommendations to account teams. Develop dashboards and reports using SAS Visual Analytics, QlikSense, and Tableau. Assist AI Product teams by providing analytics and guidance for the AI roadmap. Identify and track data points for post-release analysis of AI initiatives. Required Qualifications, Capabilities, and Skills Bachelor's degree in Business Analytics, Data Science, Computer Science, Information Systems, or related field. Strong analytical and problem-solving skills with attention to detail. 7+ years of experience in designing and implementing data models and analytic solutions. Experience with ETL/ELT processes in cloud-based data lake platforms like AWS, Azure, or Snowflake. Proficient in SQL, Python, and Alteryx for data analysis and engineering. Experience with data models and reporting packages like SAS Visual Analytics, QlikSense, or Tableau. Effective verbal and written communication skills.
Posted 1 week ago
6.0 - 11.0 years
8 - 13 Lacs
pune
Work from Office
We have an exciting opportunity for you to advance your career in AI and data analytics. As a Quant Analytics Senior Associate within the Data Analytics team, you will design, develop, and deploy AI solutions for Chase Travel. You will collaborate with various teams to create AI applications, focusing on natural language processes and information extraction, to drive analytics insights and support the AI roadmap. Job Responsibilities Organize, update, and maintain data to support engineering and analytics functions. Develop enterprise data models and large-scale data processing pipelines. Collaborate with cross-functional teams to translate business objectives into AI solutions. Act as a data expert, providing insights and recommendations to account teams. Develop dashboards and reports using SAS Visual Analytics, QlikSense, and Tableau. Assist AI Product teams by providing analytics and guidance for the AI roadmap. Identify and track data points for post-release analysis of AI initiatives. Required Qualifications, Capabilities, and Skills Bachelor's degree in Business Analytics, Data Science, Computer Science, Information Systems, or related field. Strong analytical and problem-solving skills with attention to detail. 6+ years of experience in designing and implementing data models and analytic solutions. Experience with ETL/ELT processes in cloud-based data lake platforms like AWS, Azure, or Snowflake. Proficient in SQL, Python, and Alteryx for data analysis and engineering. Experience with data models and reporting packages like SAS Visual Analytics, QlikSense, or Tableau. Effective verbal and written communication skills.
Posted 1 week ago
5.0 - 10.0 years
7 - 12 Lacs
hyderabad
Work from Office
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer. In this role, you will: Ensure AGILE delivery practice and ways of working Take front and back end responsibility for product / application development Co-ordinate release deployment and support systems in production Interpret requirements, provide estimations and participate in agile development project lifecycles. Take end to end accountability for a GT IT product or service, identifying and developing the most appropriate Technology solutions to meet customer needs as part of the Customer Journey. Troubleshoot defects and come up with sustainable solutions Work with Ops, Development and Test Engineers to ensure operational risks are identified and addressed at all stages of a product or service release / change. Work closely with ITSO, Agile Leads & Value Stream Lead to ensure the IT solutions and project plans meet business needs and time to market. Ensure service resilience, service sustainability and recovery time objectives are met for the software solutions delivered. Be responsible for automating the continuous integration / continuous delivery pipeline within a DevOps Product/Service team, driving a culture of continuous improvement. Challenge, where appropriate, decisions made on control implementation.
Requirements 5+ years of experience in a Regulatory / IT environment. Experience in Finance IT desirable. Expertise in designing, developing, and maintaining ETL (Extract, Transform, Load) processes using IBM DataStage to integrate and manage data. Expertise in IT solution development and DevOps automation tools. Expertise in identifying and resolving performance bottlenecks in existing jobs to ensure efficient data processing. Ability to diagnose and fix issues related to DataStage jobs, ensuring data integrity and minimal downtime. Expertise in creating and maintaining documentation for DataStage jobs, including design specifications, job configurations, and troubleshooting procedures. Experience in implementing data validation and testing procedures to ensure the accuracy and reliability of data loaded into the system. Familiarity with relational databases such as Oracle, DB2, or SQL Server, including SQL knowledge. Excellent communication skills, able to articulate to peers and teams at various levels. Keen attention to detail, and demonstrable ownership of problems and tasks. A drive and enthusiasm to deliver excellent service and continual improvement.
Posted 1 week ago
9.0 - 10.0 years
35 - 40 Lacs
gurugram
Work from Office
Join Team Amex and let's lead the way together. With a focus on digitization, innovation, and analytics, the Enterprise Digital teams create central, scalable platforms and customer experiences to help markets across all of these priorities. The charter is to drive scale for the business and accelerate innovation, both for immediate impact and for long-term transformation of our business. A unique aspect of Enterprise Digital Teams is the integration of diverse skills across its remit. Enterprise Digital Teams has a very broad range of responsibilities, resulting in a broad range of initiatives around the world. The American Express Enterprise Digital Experimentation & Analytics (EDEA) team leads the Enterprise Product Analytics and Experimentation charter for Brand & Performance Marketing and Digital Acquisition & Membership experiences as well as Enterprise Platforms. The focus of this collaborative team is to drive growth by enabling efficiencies in paid performance channels & evolve our digital experiences with actionable insights & analytics. The team specializes in using data around digital product usage to drive improvements in the acquisition customer experience to deliver higher satisfaction and business value. About this Role: This role will report to the Manager of the International Acquisition experience analytics team within Enterprise Digital Experimentation & Analytics (EDEA) and will be based in Gurgaon. The candidate will be responsible for delivery of highly impactful analytics to optimize our Digital Membership Experiences across Web & App channels.
- Deliver strategic analytics focused on Digital Membership experiences across Web & App aimed at optimizing our Customer experiences
- Define and build key KPIs to monitor the acquisition journey performance and success
- Support the development of new products and capabilities
- Deliver read-outs of experiments, uncovering insights and learnings that can be utilized to further optimize the customer journey
- Gain deep functional understanding of the enterprise-wide product capabilities and associated platforms over time and ensure analytical insights are relevant and actionable
- Power in-depth strategic analysis and provide analytical and decision support by mining digital activity data along with AXP closed loop data
Minimum Qualifications
- Advanced degree in a quantitative field (e.g. Finance, Engineering, Mathematics, Computer Science)
- Strong programming skills are preferred. Some experience with Big Data programming languages (Hive, Spark), Python, SQL.
- Experience in large-scale data processing and handling; an understanding of data science is a plus.
- Ability to work in a dynamic, cross-functional environment, with strong attention to detail.
- Excellent communication skills with the ability to engage, influence, and encourage partners to drive collaboration and alignment.
Preferred Qualifications
- Strong analytical/conceptual thinking competence to solve unstructured and complex business problems and articulate key findings to senior leaders/partners in a succinct and concise manner.
- Basic knowledge of statistical techniques for experimentation & hypothesis testing: regression, t-test, chi-square test.
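The experiment read-outs and hypothesis testing mentioned above typically reduce to comparing a metric between two variants. As an illustrative sketch only (the data below is made up, and a production analysis would more likely call `scipy.stats.ttest_ind`), here is Welch's t-statistic computed by hand:

```python
# Toy experiment read-out: Welch's t-statistic comparing a journey
# metric between a control and a test variant. All numbers are
# invented for illustration.
from statistics import mean, variance

control = [3.1, 2.9, 3.4, 3.0, 3.2, 2.8]
variant = [3.6, 3.4, 3.8, 3.5, 3.3, 3.7]

def welch_t(a, b):
    # t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

t = welch_t(variant, control)
print(round(t, 2))
```

A large positive t here would suggest the variant outperforms control; the p-value and degrees of freedom would come from the full Welch test in practice.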
Posted 1 week ago
4.0 - 9.0 years
20 - 35 Lacs
pune, gurugram, bengaluru
Hybrid
Salary: 20 to 35 LPA Exp: 5 to 8 years Location: Gurgaon/Pune/Bangalore Notice: Immediate to 30 days. Roles and Responsibilities Design, develop, test, deploy, and maintain large-scale data pipelines using GCP services such as BigQuery, Dataflow, Pub/Sub, Dataproc, and Cloud Storage. Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs. Develop complex SQL queries to extract insights from large datasets stored in Google Cloud SQL databases. Troubleshoot issues related to data processing workflows and provide timely resolutions. Desired Candidate Profile 5-9 years of experience in Data Engineering with expertise in GCP & BigQuery data engineering. Strong understanding of GCP Cloud Platform Administration, including Compute Engine (Dataproc), Kubernetes Engine (K8s), Cloud Storage, Cloud SQL, etc. Experience working on big data analytics projects involving ETL processes using tools like Airflow or similar technologies.
Posted 1 week ago
5.0 - 10.0 years
15 - 27 Lacs
pune, chennai
Hybrid
Education: Engineering background. Python with Pandas - strong foundation and applied skills for automation, data processing, and APIs. GenAI basics - knowledge of RAG and Agentic AI frameworks. REST APIs - able to consume and build APIs. RAG and Agentic AI. Creative thinking.
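The RAG knowledge this posting asks for centers on a retrieval step: given a query, find the most relevant document to feed to the model. A toy sketch, not part of the posting - the corpus and scoring are invented, and a real system would use embeddings and a vector store rather than keyword overlap:

```python
# Toy sketch of the retrieval step in RAG: score documents by word
# overlap with the query and return the best match. Names and texts
# are hypothetical.

docs = {
    "refunds": "Refunds are processed within 5 business days of approval",
    "shipping": "Standard shipping takes 3 to 7 business days",
    "returns": "Items can be returned within 30 days of delivery",
}

def retrieve(query, corpus):
    q = set(query.lower().split())
    # Score each document by the number of words it shares with the query.
    scores = {name: len(q & set(text.lower().split()))
              for name, text in corpus.items()}
    return max(scores, key=scores.get)

best = retrieve("how long do refunds take", docs)
print(best)  # refunds
```

The retrieved text would then be placed in the prompt so the model can ground its answer in it - the "augmented generation" half of RAG.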
Posted 1 week ago
2.0 - 6.0 years
7 - 11 Lacs
bengaluru
Work from Office
About The Role This is an Internal document. Job Title: Senior Data Engineer As a Senior Data Engineer, you will play a key role in designing and implementing data solutions @Kotak811. — You will be responsible for leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams to deliver high-quality and scalable data infrastructure. — Your expertise in data architecture, performance optimization, and data integration will be instrumental in driving the success of our data initiatives. Responsibilities 1. Data Architecture and Design a. Design and develop scalable, high-performance data architecture and data models. b. Collaborate with data scientists, architects, and business stakeholders to understand data requirements and design optimal data solutions. c. Evaluate and select appropriate technologies, tools, and frameworks for data engineering projects. d. Define and enforce data engineering best practices, standards, and guidelines. 2. Data Pipeline Development & Maintenance a. Develop and maintain robust and scalable data pipelines for data ingestion, transformation, and loading for real-time and batch use-cases b. Implement ETL processes to integrate data from various sources into data storage systems. c. Optimise data pipelines for performance, scalability, and reliability. i. Identify and resolve performance bottlenecks in data pipelines and analytical systems. ii. Monitor and analyse system performance metrics, identifying areas for improvement and implementing solutions. iii. Optimise database performance, including query tuning, indexing, and partitioning strategies. d. Implement real-time and batch data processing solutions. 3. Data Quality and Governance a. Implement data quality frameworks and processes to ensure high data integrity and consistency. b. Design and enforce data management policies and standards. c. Develop and maintain documentation, data dictionaries, and metadata repositories.
d. Conduct data profiling and analysis to identify data quality issues and implement remediation strategies. 4. ML Models Deployment & Management (is a plus) a. Responsible for designing, developing, and maintaining the infrastructure and processes necessary for deploying and managing machine learning models in production environments b. Implement model deployment strategies, including containerization and orchestration using tools like Docker and Kubernetes. c. Optimise model performance and latency for real-time inference in consumer applications. d. Collaborate with DevOps teams to implement continuous integration and continuous deployment (CI/CD) processes for model deployment. e. Monitor and troubleshoot deployed models, proactively identifying and resolving performance or data-related issues. f. Implement monitoring and logging solutions to track model performance, data drift, and system health. 5. Team Leadership and Mentorship a. Lead data engineering projects, providing technical guidance and expertise to team members. i. Conduct code reviews and ensure adherence to coding standards and best practices. b. Mentor and coach junior data engineers, fostering their professional growth and development. c. Collaborate with cross-functional teams, including data scientists, software engineers, and business analysts, to drive successful project outcomes. d. Stay abreast of emerging technologies, trends, and best practices in data engineering and share knowledge within the team. i. Participate in the evaluation and selection of data engineering tools and technologies. Qualifications 1. 3-5 years' experience with a Bachelor's Degree in Computer Science, Engineering, Technology or a related field required 2. Good understanding of streaming technologies like Kafka, Spark Streaming. 3.
Experience with Enterprise Business Intelligence Platform/Data platform sizing, tuning, optimization and system landscape integration in large-scale, enterprise deployments. 4. Proficiency in at least one programming language, preferably Java, Scala or Python 5. Good knowledge of Agile, SDLC/CICD practices and tools 6. Must have proven experience with Hadoop, MapReduce, Hive, Spark, and Scala programming. Must have in-depth knowledge of performance tuning/optimizing data processing jobs and debugging time-consuming jobs. 7. Proven experience in development of conceptual, logical, and physical data models for Hadoop, relational, EDW (enterprise data warehouse) and OLAP database solutions. 8. Good understanding of distributed systems 9. Experience working extensively in a multi-petabyte DW environment 10. Experience in engineering large-scale systems in a product environment
Posted 1 week ago
1.0 - 5.0 years
2 - 4 Lacs
bengaluru
Work from Office
About The Role Project Role: Data & Document Mgmt Processor Project Role Description: Perform end to end document management services according to service level agreements. This includes data digitization, data indexing, document scanning and maintenance etc. Support initiatives with a focus on continuous improvement. Must have skills: Business Requirements Analysis Good to have skills: AWS Architecture Minimum 7.5 year(s) of experience is required Educational Qualification: 15 years full time education Must Have Skills: 7+ years experience in Capital Markets domain (Asset & Wealth management: front office processes, including research, ratings, portfolio management and trading) Core BA Skills: requirement elicitation, impact analysis, requirement documentation, user stories creation, DOD, working with PO finalizing PB, test support, business readiness, along with JIRA + Confluence know-how. Strong communication skills. Understanding of Investment Data & Domain expertise. Ability to query databases and data sources to support requirements gathering approach (SQL, AWS) Qualification 15 years full time education
Posted 1 week ago
1.0 - 5.0 years
2 - 4 Lacs
bengaluru
Work from Office
About The Role Project Role: Data & Document Mgmt Processor Project Role Description: Perform end to end document management services according to service level agreements. This includes data digitization, data indexing, document scanning and maintenance etc. Support initiatives with a focus on continuous improvement. Must have skills: Business Requirements Analysis; resource should know about J2EE Good to have skills: AWS Architecture Minimum 7.5 year(s) of experience is required Educational Qualification: 15 years full time education Must Have Skills: 7+ years experience in Capital Markets domain (Asset & Wealth management: front office processes, including research, ratings, portfolio management and trading) Core BA Skills: requirement elicitation, impact analysis, requirement documentation, user stories creation, DOD, working with PO finalizing PB, test support, business readiness, along with JIRA + Confluence know-how. Strong communication skills. Understanding of Investment Data & Domain expertise. Ability to query databases and data sources to support requirements gathering approach (SQL, AWS) Qualification 15 years full time education
Posted 1 week ago
0.0 years
0 - 2 Lacs
mangaluru
Work from Office
Walk-in drive for "Only BSC freshers batch 2022 - 2025 (No computer science background)" at Mangalore on 6th Sep 2025 Greetings from Infosys BPM Ltd., You are kindly invited for the Infosys BPM: Walk-In Drive on 6th Sep 2025 at Mangalore. Note: Please carry a copy of this email to the venue and make sure you register your application before attending the walk-in. Please mention Candidate ID on top of the Resume https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-224900 Interview Information: Interview Date: 6th Sep 2025 Interview Time: 09:30 AM till 12:30 PM Interview Venue: Infosys BPM, Kamblapadavu, Kurnadu Post, Mudipu, Ullal Taluk, Mangalore - 574153 Documents to Carry: Please carry 2 sets of updated CV (Hard Copy). Please carry a Face Mask. Mandatory to carry PAN Card or Passport for Identity proof. NOTE: Candidates need to bring PAN card without fail for Assessment. Interview Information: Interview Date: 6th Sep 25. Reporting Time: 09:30 AM till 11:00 AM Round 1 - Aptitude Assessment (10:00 AM to 12:00 PM) Round 2 - Ops Screening Face to Face interview (12:30 PM to 04:00 PM) Note - Post 11:30 AM (entry not allowed) Job Description: Job Location: Mangalore Qualification: BSC Freshers (Non-Computer science background - only these graduates are eligible for interview) Shifts: Night Shift Notice Period: Immediate joiners only Roles & Responsibilities Candidate needs to have 15 years of full-time education Proficient with basic computer knowledge Excellent website research & navigation skills Should be good in reading/understanding/interpretation of the content Candidate should be flexible to work in a 24*7 environment, comfortable to work in night shifts (Rotational) Excellent verbal, written communication, interpretation and active listening skills Should have good command over English Grammar and Fluency in English Should be able to manage outbound calls in a timely manner following the scripts when handling different subjects/scenarios Ability to quickly and efficiently assimilate process knowledge Effective probing & analyzing skills and capable of multi-tasking of voice & data entry No long leave planned for at least the next 1 year Work will be from office only (No WFH) Regards, Infosys BPM Recruitment team
Posted 1 week ago