2.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Engineer at our company, you will play a crucial role in designing, developing, and optimizing data pipelines and workflows in a cloud-based environment. Your expertise in PySpark, Snowflake, and AWS will be key as you leverage these technologies for data processing and analytics.

Your responsibilities will include designing and implementing scalable ETL pipelines using PySpark on AWS, developing and optimizing data workflows for Snowflake integration, and managing and configuring various AWS services such as S3, Lambda, Glue, EMR, and Redshift. Collaboration with data analysts and business teams to understand requirements and deliver solutions will be essential, along with ensuring data security and compliance with best practices in AWS and Snowflake environments. Monitoring and troubleshooting data pipelines and workflows for performance and reliability, as well as writing efficient, reusable, and maintainable code for data processing and transformation, will also be part of your role.

To excel in this position, you should have strong experience with AWS services like S3, Lambda, Glue, and MSK, proficiency in PySpark for large-scale data processing, hands-on experience with Snowflake for data warehousing and analytics, and a solid understanding of SQL and database optimization techniques. Knowledge of data lake and data warehouse architectures, familiarity with CI/CD pipelines and version control systems like Git, as well as strong problem-solving and debugging skills are also required. Experience with Terraform or CloudFormation for infrastructure as code, knowledge of Python for scripting and automation, familiarity with Apache Airflow for workflow orchestration, and understanding of data governance and security best practices will be beneficial. Certification in AWS or Snowflake is a plus.

You should hold a Bachelor's degree in Computer Science, Engineering, or a related field with 6 to 10 years of experience, including 5+ years of experience in AWS cloud engineering and 2+ years of experience with PySpark and Snowflake. Join us in our Technology team as a valuable member of the Digital Software Engineering job family, working full-time to contribute your most relevant skills while continuously growing and expanding your expertise.
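For context, the snippet below is a minimal PySpark sketch of the kind of ETL step this posting describes: reading raw data from S3, applying basic cleansing, and writing curated Parquet that a downstream job could load into Snowflake. The bucket paths, column names, and dataset are illustrative assumptions, not details of the actual role.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw CSV files from S3 (paths and schema are hypothetical)
raw = spark.read.option("header", True).csv("s3a://example-bucket/raw/orders/")

# Basic cleansing: deduplicate, fix types, drop invalid rows
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Write curated, partitioned Parquet; a Snowflake COPY INTO or connector job
# would typically pick this output up downstream
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/orders/"
)
```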
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
kolkata, west bengal
On-site
You are a Data Engineer with 3+ years of experience, proficient in SQL and Python development. You will be responsible for designing, developing, and maintaining scalable data pipelines to support ETL processes using tools like Apache Airflow, AWS Glue, or similar. Your role involves optimizing and managing relational and NoSQL databases such as MySQL, PostgreSQL, MongoDB, or Cassandra for high performance and scalability.

You will write advanced SQL queries, stored procedures, and functions to efficiently extract, transform, and analyze large datasets. Additionally, you will implement and manage data solutions on cloud platforms like AWS, Azure, or Google Cloud, utilizing services such as Redshift, BigQuery, or Snowflake. Your contributions to designing and maintaining data warehouses and data lakes will support analytics and BI requirements. Automation of data processing tasks through script and application development in Python or other programming languages is also part of your responsibilities.

As a Data Engineer, you will implement data quality checks, monitoring, and governance policies to ensure data accuracy, consistency, and security. Collaboration with data scientists, analysts, and business stakeholders to understand data needs and translate them into technical solutions is essential. Identifying and resolving performance bottlenecks in data systems and optimizing data storage and retrieval are key aspects. Maintaining comprehensive documentation for data processes, pipelines, and infrastructure is crucial, as is staying up to date with the latest trends in data engineering, big data technologies, and cloud services.

You should hold a Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field. Proficiency in SQL, relational databases, NoSQL databases, and Python programming, along with experience with data pipeline tools and cloud platforms, is required. Knowledge of big data tools like Apache Spark, Hadoop, or Kafka is a plus. Strong analytical and problem-solving skills with a focus on performance optimization and scalability are essential, as are excellent verbal and written communication skills to convey technical concepts to non-technical stakeholders. You should be able to work collaboratively in cross-functional teams. Preferred certifications include AWS Certified Data Analytics, Google Professional Data Engineer, or similar. An eagerness to learn new technologies and adapt quickly in a fast-paced environment will be valuable in this role.
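As a rough illustration of the orchestration work mentioned above, here is a minimal Apache Airflow DAG sketch that chains an extract step and a load step on a daily schedule. The DAG id, task names, and placeholder functions are hypothetical, not part of the actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull data from a source system (API, database, files)
    print("extracting...")


def load():
    # Placeholder: load transformed data into the warehouse
    print("loading...")


with DAG(
    dag_id="daily_etl_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```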
Posted 1 month ago
2.0 - 10.0 years
0 Lacs
coimbatore, tamil nadu
On-site
You should have 3 to 10 years of experience in AI development and be located in Coimbatore. Immediate joiners are preferred, and a minimum of 2 years of experience in core Gen AI is required.

As an AI Developer, your responsibilities will include designing, developing, and fine-tuning Large Language Models (LLMs) for various in-house applications. You will implement and optimize Retrieval-Augmented Generation (RAG) techniques to enhance AI response quality. Additionally, you will develop and deploy Agentic AI systems capable of autonomous decision-making and task execution. Building and managing data pipelines for processing, transforming, and feeding structured/unstructured data into AI models will be part of your role. It is essential to ensure scalability, performance, and security of AI-driven solutions in production environments. Collaboration with cross-functional teams, including data engineers, software developers, and product managers, is expected. You will conduct experiments and evaluations to improve AI system accuracy and efficiency while staying updated with the latest advancements in AI/ML research, open-source models, and industry best practices.

You should have strong experience in LLM fine-tuning using frameworks like Hugging Face, DeepSpeed, or LoRA/PEFT. Hands-on experience with RAG architectures, including vector databases such as Pinecone, ChromaDB, Weaviate, OpenSearch, and FAISS, is required. Experience in building AI agents using LangChain, LangGraph, CrewAI, AutoGPT, or similar frameworks is preferred. Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow is necessary, as is experience with Python web frameworks such as FastAPI, Django, or Flask. You should also have experience in designing and managing data pipelines using tools like Apache Airflow, Kafka, or Spark. Knowledge of cloud platforms (AWS/GCP/Azure) and containerization technologies (Docker, Kubernetes) is essential. Familiarity with LLM APIs (OpenAI, Anthropic, Mistral, Cohere, Llama, etc.) and their integration in applications is a plus. A strong understanding of vector search, embedding models, and hybrid retrieval techniques is required.

Experience with optimizing inference and serving AI models in real-time production systems is beneficial. Experience with multi-modal AI (text, image, audio) and familiarity with privacy-preserving AI techniques and responsible AI frameworks are desirable. An understanding of MLOps best practices, including model versioning, monitoring, and deployment automation, is a plus.

Skills required for this role include PyTorch, RAG architectures, OpenSearch, Weaviate, Docker, LLM fine-tuning, ChromaDB, Apache Airflow, LoRA, Python, hybrid retrieval techniques, Django, GCP, CrewAI, OpenAI, Hugging Face, Gen AI, Pinecone, FAISS, AWS, AutoGPT, embedding models, Flask, FastAPI, LLM APIs, DeepSpeed, vector search, PEFT, LangChain, Azure, Spark, Kubernetes, TensorFlow, real-time production systems, LangGraph, and Kafka.
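To make the retrieval side of RAG concrete, here is a minimal, framework-free sketch of the core step: embed candidate passages and a query, rank them by cosine similarity, and keep the top-k passages as context for the LLM prompt. The embed() function is a placeholder for a real embedding model (for example a sentence encoder or an embeddings API), and the documents are dummies.

```python
import numpy as np


def embed(texts):
    # Placeholder: in practice, call an embedding model here and return
    # one vector per input text.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 384))


def top_k_passages(query, passages, k=3):
    doc_vecs = embed(passages)
    query_vec = embed([query])[0]
    # Cosine similarity between the query vector and every passage vector
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    best = np.argsort(sims)[::-1][:k]
    return [passages[i] for i in best]


context = top_k_passages(
    "How do I reset my device?",
    ["Document one ...", "Document two ...", "Document three ..."],
)
# The retrieved context would then be inserted into the prompt sent to the LLM.
```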
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Cloud Developer specializing in Google Cloud Platform (GCP), Python, and Apache Airflow, you will be responsible for designing and developing cloud-native applications and APIs. Your expertise in deploying microservices on GCP Cloud Run, utilizing Docker for containerization, and managing data workflows with Apache Airflow will be crucial in optimizing scalable cloud services. Collaborating with cross-functional teams in an Agile environment, you will ensure the development, maintenance, and security of cloud infrastructure using various GCP services.

Your key responsibilities will include designing and developing cloud-native applications and APIs using Python and frameworks like Django, FastAPI, or Flask. You will deploy microservices on GCP Cloud Run, implement data workflows with Apache Airflow, and secure cloud infrastructure using GCP services such as Compute Engine, BigQuery, and Cloud Storage. Additionally, you will apply best practices in GCP security, collaborate with cross-functional teams, and optimize data pipelines for reliability and observability.

To excel in this role, you must possess strong experience with GCP core services, proficiency in Python and frameworks, hands-on experience with Docker and Cloud Run, and a good understanding of GCP IAM and security practices. Experience with SQL, API Gateway tools, Agile development practices, and cross-functional collaboration will be essential. Nice-to-have skills include experience with CI/CD pipelines, DevOps practices, serverless architecture in GCP, and exposure to Dataflow, Pub/Sub, and BigQuery in production environments. Your problem-solving mindset, willingness to learn emerging cloud technologies, strong communication skills, and passion for automation and scalable architecture will be valuable assets in this role.

Key Qualifications:
- Strong experience with GCP and its core services.
- Proficiency in Python and frameworks like Django, FastAPI, or Flask.
- Proven experience with Apache Airflow for workflow orchestration.
- Hands-on experience with Docker and Cloud Run.
- Good understanding of GCP IAM and security best practices.
- Experience with SQL for data manipulation and analysis.
- Basic working knowledge of API Gateway tools.
- Familiarity with Agile development practices and cross-functional collaboration.

If you are looking for a challenging role where you can leverage your expertise in GCP, Python, and Apache Airflow to contribute to the development of scalable cloud services, this position as a Cloud Developer may be the perfect fit for you.
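For illustration only, here is a minimal FastAPI sketch of the kind of cloud-native API this role describes; the endpoints, payload model, and the port convention for Cloud Run are assumptions rather than details of the actual service.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-service")


class Item(BaseModel):
    name: str
    quantity: int


@app.get("/healthz")
def health():
    # Simple liveness endpoint, handy behind Cloud Run / load balancers
    return {"status": "ok"}


@app.post("/items")
def create_item(item: Item):
    # Placeholder: persist the item to a datastore such as Cloud SQL or Firestore
    return {"name": item.name, "quantity": item.quantity}

# Typically packaged in a container and started with, for example:
#   uvicorn main:app --host 0.0.0.0 --port 8080
```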
Posted 1 month ago
1.0 - 5.0 years
0 Lacs
maharashtra
On-site
As a Data Analyst in our dynamic fintech environment, your primary responsibility will be to extract, clean, and analyze large datasets to identify patterns and trends. You will play a crucial role in developing and maintaining dashboards and reports to monitor business performance. Collaborating with cross-functional teams, you will work towards improving data accuracy and accessibility. Conducting deep-dive analysis on customer behavior, transactions, and engagement will enable you to enhance retention and acquisition strategies. Additionally, you will be expected to identify potential risks, anomalies, and growth opportunities using data.

To excel in this role, you should hold a Bachelor's or Master's degree in Data Science, Statistics, Mathematics, Economics, Computer Science, or a related field. A minimum of 1 year of domain exposure in Banking, Lending, or Fintech verticals is required. Proficiency in Python for data analysis, including EDA, feature engineering, and predictive analytics, is essential. You should also possess expertise in SQL for data querying and transformation, as well as mandatory experience in Tableau for building executive dashboards and visual storytelling.

While not mandatory, exposure to Apache Airflow for the orchestration of ETL workflows, cloud platforms such as AWS or GCP, and version control tools like Git or Bitbucket would be beneficial. This role offers the opportunity to work on real-time data pipelines, credit risk models, and customer lifecycle analytics.
Posted 1 month ago
5.0 - 10.0 years
5 - 15 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
We are seeking an experienced Apache Airflow Subject Matter Expert (SME) (Contract, Remote - India) to join our Data Engineering team. You will be responsible for optimizing Airflow environments, building scalable orchestration frameworks, and supporting enterprise-scale data pipelines, while collaborating with cross-functional teams.

Skills:
- Optimize and fine-tune existing Apache Airflow environments, addressing performance and reliability.
- Design and develop scalable, modular, and reusable Airflow DAGs for complex data workflows.
- Integrate Airflow with cloud-native services such as data factories, compute platforms, storage, and analytics.
- Develop and maintain CI/CD pipelines for DAG deployment, testing, and release automation.
- Implement monitoring, alerting, and logging standards to ensure operational excellence.
- Provide architectural guidance and hands-on support for new data pipeline development.
- Document Airflow configurations, deployment processes, and operational procedures.
- Mentor engineers and lead knowledge-sharing on orchestration best practices.
- Expertise in Airflow internals, including schedulers, executors (Celery, Kubernetes), and plugins.
- Experience with autoscaling solutions (KEDA) and Celery for distributed task execution.
- Strong hands-on skills in Python programming and modular code development.
- Proficiency with cloud services (Azure, AWS, or GCP), including data pipelines, compute, and storage.
- Solid experience with CI/CD tools such as Azure DevOps, Jenkins, or GitHub Actions.
- Familiarity with Docker, Kubernetes, and related deployment technologies.
- Strong background in monitoring tools (Prometheus, Grafana) and log aggregation (ELK, Log Analytics).
- Excellent problem-solving, communication, and collaboration skills.

Interested? Please send your updated CV to jobs.india@pixelcodetech.com and a member of our resource team will be in touch.
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
haryana
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values, and Leadership Behaviors, with an unwavering commitment to supporting our customers, communities, and colleagues. As a member of Team Amex, you will receive comprehensive support for your holistic well-being and numerous opportunities to enhance your skills, develop leadership qualities, and advance your career. Your voice and ideas hold significance here, making a tangible impact as we collectively shape the future of American Express.

Enterprise Architecture, situated within the Chief Technology Office at American Express, plays a crucial role as a key enabler of the company's technology strategy. This organization focuses on four primary pillars:
- Architecture as Code: Responsible for managing foundational technologies utilized by engineering teams across the enterprise.
- Architecture as Design: Involves solution and technical design for transformation programs and critical projects requiring architectural guidance.
- Governance: Defines technical standards and develops innovative tools to automate controls for ensuring compliance.
- Colleague Enablement: Concentrates on colleague development, recognition, training, and enterprise outreach.

As part of the team, your responsibilities will include:
- Designing, developing, and ensuring the scalability, security, and resilience of applications and data pipelines.
- Providing architectural guidance and documentation to support regulatory audits when necessary.
- Contributing to enterprise architecture initiatives, domain reviews, and solution architecture.
- Promoting innovation by exploring new tools, frameworks, and design methodologies.

To qualify for this role, we are seeking candidates with the following qualifications:
- Ideally possess a BS or MS degree in computer science, computer engineering, or a related technical discipline.
- Minimum of 6 years of software engineering experience with a strong proficiency in Java and Node.js.
- Experience with Python and workflow orchestration tools like Apache Airflow is highly desirable.
- Demonstrated expertise in designing and implementing distributed systems and APIs.
- Familiarity with cloud platforms such as GCP, AWS, and modern CI/CD pipelines.
- Ability to articulate clear architectural documentation and present ideas concisely.
- Proven success working collaboratively in a cross-functional, matrixed environment.
- Passion for innovation, problem-solving, and driving technology modernization.
- Preferred experience with microservices architectures and event-driven architecture.

American Express provides benefits that cater to your holistic well-being, ensuring you can perform at your best. These benefits include competitive base salaries, bonus incentives, support for financial well-being and retirement, comprehensive medical, dental, vision, life insurance, and disability benefits, flexible working models, generous paid parental leave policies, access to global on-site wellness centers, confidential counseling support through the Healthy Minds program, and career development and training opportunities. Please note that an offer of employment with American Express is subject to the successful completion of a background verification check, as per applicable laws and regulations.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Data Specialist, you will be responsible for utilizing your expertise in ETL Fundamentals, SQL, BigQuery, Dataproc, Python, Data Catalog, Data Warehousing, and various other tools to contribute to the successful implementation of data projects. Your role will involve working with technologies such as Cloud Trace, Cloud Logging, Cloud Storage, and Datafusion to build and maintain a modern data platform.

To excel in this position, you should possess a minimum of 5 years of experience in the data engineering field, with a focus on the GCP cloud data implementation suite including BigQuery, Pub/Sub, Dataflow/Apache Beam, Airflow/Composer, and Cloud Storage. Your strong understanding of very large-scale data architecture and hands-on experience in data warehouses, data lakes, and analytics platforms will be crucial for the success of our projects.

Key Requirements:
- Minimum 5 years of experience in data engineering
- Hands-on experience in the GCP cloud data implementation suite
- Strong expertise in GBQ Query, Python, Apache Airflow, and SQL (BigQuery preferred)
- Extensive hands-on experience with SQL and Python for working with data

If you are passionate about data and have a proven track record of delivering results in a fast-paced environment, we invite you to apply for this exciting opportunity to be a part of our dynamic team.
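As a small illustration of the GCP stack mentioned here, the snippet below queries BigQuery from Python using the official client library; the project, dataset, and table names are placeholders, not part of the actual platform.

```python
from google.cloud import bigquery

# Assumes application-default credentials are configured for the environment
client = bigquery.Client(project="example-project")

sql = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""

# Run the query and print the daily event counts
for row in client.query(sql).result():
    print(row.event_date, row.events)
```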
Posted 1 month ago
2.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
You are an experienced Data Engineer with expertise in PySpark, Snowflake, and AWS, and you will be responsible for designing, developing, and optimizing data pipelines and workflows in a cloud-based environment. Your main focus will be leveraging AWS services, PySpark, and Snowflake for data processing and analytics.

Your key responsibilities will include designing and implementing scalable ETL pipelines using PySpark on AWS, developing and optimizing data workflows for Snowflake integration, managing and configuring AWS services such as S3, Lambda, Glue, EMR, and Redshift, collaborating with data analysts and business teams to understand requirements and deliver solutions, ensuring data security and compliance with best practices in AWS and Snowflake environments, monitoring and troubleshooting data pipelines and workflows for performance and reliability, and writing efficient, reusable, and maintainable code for data processing and transformation.

Required skills for this role include strong experience with AWS services (S3, Lambda, Glue, MSK, etc.), proficiency in PySpark for large-scale data processing, hands-on experience with Snowflake for data warehousing and analytics, a solid understanding of SQL and database optimization techniques, knowledge of data lake and data warehouse architectures, familiarity with CI/CD pipelines and version control systems (e.g., Git), strong problem-solving and debugging skills, experience with Terraform or CloudFormation for infrastructure as code, knowledge of Python for scripting and automation, familiarity with Apache Airflow for workflow orchestration, and an understanding of data governance and security best practices. Certification in AWS or Snowflake is a plus.

For education and experience, a Bachelor's degree in Computer Science, Engineering, or a related field with 6 to 10 years of experience is required, along with 5+ years of experience in AWS cloud engineering and 2+ years of experience with PySpark and Snowflake. This position falls under the Technology Job Family Group and the Digital Software Engineering Job Family, and it is a full-time role.
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
chennai, tamil nadu
On-site
You will be responsible for fetching and transforming data from various systems, conducting in-depth analyses to identify gaps, opportunities, and insights, and providing recommendations that support strategic business decisions. Your key responsibilities will include data extraction and transformation, data analysis and insight generation, visualization and reporting, collaboration with cross-functional teams, and building strong working relationships with external stakeholders. You will report to the VP Business Growth and work closely with clients.

To excel in this role, you should have proficiency in SQL for data querying and Python for data manipulation and transformation. Experience with data engineering tools such as Spark and Kafka, as well as orchestration tools like Apache NiFi and Apache Airflow, will be essential for ETL processes and workflow automation. Expertise in data visualization tools such as Tableau and Power BI, along with strong analytical skills including statistical techniques, will be crucial.

In addition to technical skills, you should possess soft skills such as flexibility, excellent communication skills, business acumen, and the ability to work independently as well as within a team. Your academic qualifications should include a Bachelor's or Master's degree in Applied Mathematics, Management Science, Data Science, Statistics, Econometrics, or Engineering. Extensive experience in Data Lake architecture, building data pipelines using AWS services, proficiency in Python and SQL, and experience in the banking domain will be advantageous. Overall, you should demonstrate high motivation, a good work ethic, maturity, personal initiative, and strong oral and written communication skills to succeed in this role.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
kochi, kerala
On-site
The ideal candidate ready to join immediately can share their details via email for quick processing at nitin.patil@ust.com. Act swiftly for immediate attention!

With over 5 years of experience, the successful candidate will have the following roles and responsibilities:
- Designing, developing, and maintaining scalable data pipelines using Spark (PySpark or Spark with Scala).
- Constructing data ingestion and transformation frameworks for both structured and unstructured data sources.
- Collaborating with data analysts, data scientists, and business stakeholders to comprehend requirements and deliver reliable data solutions.
- Handling large volumes of data while ensuring quality, integrity, and consistency.
- Optimizing data workflows for enhanced performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP.
- Implementing data quality checks and automation for ETL/ELT pipelines.
- Monitoring and troubleshooting data issues in production environments and conducting root cause analysis.
- Documenting technical processes, system designs, and operational procedures.

Key Skills Required:
- Minimum 3 years of experience as a Data Engineer or in a similar role.
- Proficiency with PySpark or Spark using Scala.
- Strong grasp of SQL for data querying and transformation purposes.
- Previous experience working with any cloud platform (AWS, Azure, or GCP).
- Sound understanding of data warehousing concepts and big data architecture.
- Familiarity with version control systems like Git.

Desired Skills:
- Exposure to data orchestration tools such as Apache Airflow, Databricks Workflows, or equivalent.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools like Docker/Kubernetes.
- Experience with CI/CD practices and familiarity with DevOps principles.
- Understanding of data governance, security, and compliance standards.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Data Engineer II in our team, you will be responsible for managing the deprecation of migrated workflows and ensuring the seamless migration of workflows into new systems. Your expertise in building and maintaining scalable data pipelines, both on-premises and on the cloud, will be crucial. You should have a deep understanding of input and output data sources, upstream and downstream dependencies, and data quality assurance. Proficiency in tools like Git, Apache Airflow, Apache Spark, SQL, data migration, and data validation is essential for this role.

Your key responsibilities will include:

Workflow Deprecation:
- Evaluate current workflows' dependencies and consumption for deprecation.
- Identify, mark, and communicate deprecated workflows using tools and best practices.

Data Migration:
- Plan and execute data migration tasks ensuring accuracy and completeness.
- Implement strategies for accelerating data migration pace and ensuring data readiness.

Data Validation:
- Define and implement data validation rules for accuracy and reliability.
- Monitor data quality using validation solutions and anomaly detection methods.

Workflow Management:
- Schedule, monitor, and automate data workflows using Apache Airflow.
- Develop and manage Directed Acyclic Graphs (DAGs) in Airflow for complex data processing tasks.

Data Processing:
- Develop and maintain data processing scripts using SQL and Apache Spark.
- Optimize data processing for performance and efficiency.

Version Control:
- Collaborate using Git for version control and manage the codebase effectively.
- Ensure code quality and repository management best practices.

Continuous Improvement:
- Stay updated with the latest data engineering technologies.
- Enhance performance and reliability by improving and refactoring data pipelines and processes.

Skills and Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proficient in Git, SQL, and database technologies.
- Experience in Apache Airflow and Apache Spark for data processing.
- Knowledge of data migration, validation techniques, governance, and security.
- Strong problem-solving skills and ability to work independently and in a team.
- Excellent communication skills to collaborate with a global team in a high-performing environment.
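To illustrate the data validation responsibilities listed above, here is a minimal PySpark sketch that compares a source table against its migrated counterpart with a row-count parity check and a null-rate check on a key column. The table and column names are placeholders, not the actual migration targets.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("migration_validation").getOrCreate()

source = spark.table("legacy.orders")        # placeholder source table
target = spark.table("new_platform.orders")  # placeholder migrated table

# Row-count parity between source and target
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"Row counts differ: {src_count} vs {tgt_count}"

# Null-rate check on a key column in the migrated table
null_rate = (
    target.select(F.avg(F.col("order_id").isNull().cast("int")).alias("null_rate"))
          .collect()[0]["null_rate"]
)
assert null_rate == 0, f"Unexpected null order_id rate: {null_rate}"
```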
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You will be responsible for designing and building scalable and efficient data warehouses to support analytics and reporting needs. Additionally, you will develop and optimize ETL pipelines by writing complex SQL queries and automating data pipelines using tools like Apache Airflow. Your role will also involve query optimization, performance tuning, and database management with MySQL, PostgreSQL, and Spark for structured and semi-structured data.

Ensuring data quality and governance will be a key part of your responsibilities, where you will validate, monitor, and enforce practices to maintain data accuracy, consistency, and completeness. You will implement data governance best practices, define data standards, access controls, and policies to uphold a well-governed data ecosystem. Data modeling, ETL best practices, BI dashboarding, and proposing/implementing solutions to improve existing systems will also be part of your day-to-day tasks.

Collaboration and problem-solving are essential in this role as you will work independently, collaborate with cross-functional teams, and proactively troubleshoot data challenges. Experience with dbt for data transformations is considered a bonus for this position.

To qualify for this role, you should have 5-7 years of experience in the data domain, with expertise in data engineering and BI. Strong SQL skills, hands-on experience with data warehouse concepts, ETL best practices, and proficiency in MySQL, PostgreSQL, and Spark are required. Experience with Apache Airflow, data modeling techniques, BI tools like Power BI, Tableau, Apache Superset, data quality frameworks, and governance policies are also essential. The ability to work independently, identify problems, and propose effective solutions is crucial.

If you are looking to join a dynamic team at Zenda and have the required experience and skills, we encourage you to apply for this position.
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
The Digital Success Engineering team is looking for an experienced Marketing Cloud Engineer (SMTS) to join the Customer Engagement Engineering team. In this role, you will be responsible for providing product support to customers and business users by leveraging your strong Salesforce Marketing Cloud platform knowledge, technical expertise, and exceptional business-facing skills. You will work closely with various teams to understand, troubleshoot, and coordinate operational issues within the Customer Engagement Ecosystem.

Your responsibilities will include conducting technical requirements gathering, designing and implementing robust solutions for Salesforce Marketing Cloud projects, and ensuring seamless integration with internal and external systems. You will also be expected to triage and troubleshoot issues, demonstrate analytical and problem-solving expertise, and maintain technical and domain expertise in your assigned areas.

To succeed in this role, you must have a Bachelor's degree in Computer Science, Software Engineering, or a related field, along with at least 8 years of hands-on experience in Salesforce Marketing Cloud and other Salesforce Core products. You should be a self-starter, able to work under pressure, and possess advanced knowledge in systems integrations, APIs, marketing compliance, and security protocols. Proficiency in various technical tools and languages such as AMPScript, HTML, CSS, JavaScript, SQL, Python, and Rest API is required.

Additionally, you must have excellent communication skills to effectively collaborate with cross-functional teams, provide thought leadership, and drive successful digital program execution. Your project management skills should be top-notch, enabling you to organize, prioritize, and simplify engineering work across different technical domains. If you are a problem solver with a passion for technology, possess exceptional communication skills, and thrive in a fast-paced environment, we encourage you to apply for this role and be a key player in our Digital Success Engineering team.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
You will be responsible for designing and building data warehouses to support analytics and reporting needs. This includes architecting scalable and efficient data warehouses. Additionally, you will be developing and optimizing ETL pipelines by writing complex SQL queries and utilizing tools like Apache Airflow for automation. Query optimization and performance tuning will be a key aspect of your role, where you will focus on writing efficient SQL queries for both ETL jobs and dashboards in BI tools. Database management is another crucial responsibility, involving working with MySQL, PostgreSQL, and Spark to manage structured and semi-structured data.

You will also ensure data quality and governance by validating, monitoring, and implementing governance practices to maintain data accuracy, consistency, and completeness. Implementing data governance best practices, defining data standards, access controls, and policies will be essential to maintain a well-governed data ecosystem. Your role will also include focusing on data modeling and ETL best practices to ensure robust data modeling and the application of best practices for ETL development. Working with BI tools such as Power BI, Tableau, and Apache Superset to create insightful dashboards and reports will also be part of your responsibilities. You will identify and propose improvements to existing systems and take ownership of designing and developing new data solutions.

Collaboration and problem-solving are integral to this role, as you will work independently, collaborate with cross-functional teams, and proactively troubleshoot data challenges. Experience with dbt for data transformations is a bonus.

Requirements for this role include 5-7 years of experience in the data domain encompassing data engineering and BI. Strong SQL skills with expertise in writing efficient and complex queries are essential. Hands-on experience with data warehouse concepts and ETL best practices, proficiency in MySQL, PostgreSQL, and Spark, and experience using Apache Airflow for workflow orchestration are also required. A strong understanding of data modeling techniques for analytical workloads, experience with Power BI, Tableau, and Apache Superset for reporting and dashboarding, and familiarity with data quality frameworks, data validation techniques, and governance policies are prerequisites. The ability to work independently, identify problems, and propose effective solutions is crucial. Experience with dbt for data transformations is a bonus.

This job opportunity was posted by Meenal Sharma from Zenda.
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.

Position: Senior Software Engineer - AI/ML Backend Developer
Experience: 4-6 years
Category: Software Development/Engineering
Location: Bangalore/Hyderabad/Chennai/Pune/Mumbai
Shift Timing: General Shift
Position ID: J0725-0150
Employment Type: Full Time
Education Qualification: Bachelor's degree in computer science or a related field, or higher, with a minimum of 4 years of relevant experience.

We are seeking an experienced AI/ML Backend Developer to join our dynamic technology team. The ideal candidate will have a strong background in developing and deploying machine learning models, implementing AI algorithms, and managing backend systems and integrations. You will play a key role in shaping the future of our technology by integrating cutting-edge AI/ML techniques into scalable backend solutions.

Your future duties and responsibilities:
- Develop, optimize, and maintain backend services for AI/ML applications.
- Implement and deploy machine learning models to production environments.
- Collaborate closely with data scientists and frontend engineers to ensure seamless integration of backend APIs and services.
- Monitor and improve the performance, reliability, and scalability of existing AI/ML services.
- Design and implement robust data pipelines and data processing workflows.
- Identify and solve performance bottlenecks and optimize AI/ML algorithms for production.
- Stay current with emerging AI/ML technologies and frameworks to recommend and implement improvements.

Required qualifications to be successful in this role:

Must-have Skills:
- Python, TensorFlow, PyTorch, scikit-learn
- Machine learning frameworks: TensorFlow, PyTorch, scikit-learn
- Backend development frameworks: Flask, Django, FastAPI
- Cloud technologies: AWS, Azure, Google Cloud Platform (GCP)
- Containerization and orchestration: Docker, Kubernetes
- Data management and pipeline tools: Apache Kafka, Apache Airflow, Spark
- Database technologies: SQL databases (PostgreSQL, MySQL), NoSQL databases (MongoDB, Cassandra)
- Vector databases: Pinecone, Milvus, Weaviate
- Version control: Git
- Continuous Integration/Continuous Deployment (CI/CD) pipelines: Jenkins, GitHub Actions, GitLab CI/CD

Minimum of 4 years of experience developing backend systems, specifically in AI/ML contexts. Proven experience in deploying machine learning models and AI-driven applications in production. Solid understanding of machine learning concepts, algorithms, and deep learning techniques. Proficiency in writing efficient, maintainable, and scalable backend code. Experience working with cloud platforms (AWS, Azure, Google Cloud). Strong analytical and problem-solving skills. Excellent communication and teamwork abilities.

Good-to-have Skills:
- Java (preferred), Scala (optional)

Together, as owners, let's turn meaningful insights into action.
Life at CGI is rooted in ownership, teamwork, respect, and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
maharashtra
On-site
Whether you're at the start of your career or looking to discover your next adventure, your story begins here. At Citi, you'll have the opportunity to expand your skills and make a difference at one of the world's most global banks. We're fully committed to supporting your growth and development from the start with extensive on-the-job training and exposure to senior leaders, as well as more traditional learning. You'll also have the chance to give back and make a positive impact where we live and work through volunteerism.

The Product Developer is a strategic professional who stays abreast of developments within their field and contributes to directional strategy by considering their application in their job and the business. Recognized as a technical authority for an area within the business, this role requires basic commercial awareness. Developed communication and diplomacy skills are necessary to guide, influence, and convince others, particularly colleagues in other areas and occasional external customers. The impact of the work is significant on the area through complex deliverables, providing advice and counsel related to the technology or operations of the business. The work impacts an entire area, which eventually affects the overall performance and effectiveness of the sub-function/job family.

In this role, you're expected to:
- Develop reporting and analytical solutions using various technologies like Python, relational and non-relational databases, Business Intelligence tools, and code orchestrations
- Identify solutions ranging across data analytics, reporting, CRM, reference data, workflows, and trade processing
- Design compelling dashboards and reports using business intelligence tools like QlikView, Tableau, Pixel-perfect, etc.
- Perform data investigations with a high degree of accuracy under tight timelines
- Develop plans, prioritize, coordinate design and delivery of products or features to product release, and serve as a product ambassador within the user community
- Mentor junior colleagues on technical topics relating to data analytics and software development and conduct code reviews
- Follow market, industry, and client trends in your own field and adapt them for application to Citi's products and solutions platforms
- Work in close coordination with Technology, Business Managers, and other stakeholders to fulfill the delivery objectives
- Partner with senior team members and leaders and a widely distributed global user community to define and implement solutions
- Appropriately assess risk when making business decisions, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets by driving compliance with applicable laws, rules, and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency

As a successful candidate, you'd ideally have the following skills and exposure:
- 8-12 years of experience using tools for statistical modeling of large data sets and proficient knowledge of data modeling and databases, such as Microsoft SQL Server, Oracle, and Impala
- Advanced knowledge of analytical and business intelligence tools including Tableau Desktop, Tableau Prep, TabPy, Access
- Familiarity with product development methodologies
- Proficient knowledge of programming languages and frameworks such as Python, Visual Basic, and/or R, Apache Airflow, Streamlit and/or Flask, Starburst
- Well versed with code versioning tools like GitHub, Bitbucket, etc.
- Ability to create business analysis, troubleshoot data quality issues, and conduct exploratory and descriptive analysis of business datasets
- Ability to structure and break down problems, develop solutions, and drive results
- Project management skills with experience leading large technological initiatives

Education:
- Bachelor's/University degree, Master's degree preferred

Take the next step in your career, apply for this role at Citi today.
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
As a Developer contracted by Luxoft to support customer initiatives, your main task will involve developing solutions based on client requirements within the Telecom/network work environment. You will be responsible for using technologies such as Databricks on Azure, Apache Spark, Python, SQL, and Apache Airflow to create and manage Databricks clusters for ETL processes. Integration with ADLS and Blob Storage, along with efficient data ingestion from various sources including on-premises databases, cloud storage, APIs, and streaming data, will also be part of your role.

Moreover, you will work on handling secrets using Azure Key Vault, interacting with APIs, and gaining hands-on experience with Kafka/Azure EventHub streaming. Your expertise in Databricks Delta APIs, Unity Catalog, and version control tools like GitHub will be crucial. Additionally, you will be involved in data analytics, supporting ML frameworks, and integrating with Databricks for model training. Proficiency in Python, Apache Airflow, Microsoft Azure, Databricks, SQL, ADLS, Blob Storage, Kafka/Azure EventHub, and various other related skills is a must.

The ideal candidate should hold a Bachelor's degree in Computer Science or a related field and possess at least 7 years of experience in development. Problem-solving skills, effective communication abilities, teamwork, and a commitment to continuous learning are essential traits for this role. Desirable skills include exposure to Snowflake, PostgreSQL, Redis, and GenAI, and a good understanding of RBAC. Proficiency in English at C2 level is required for this Senior-level position based in Bengaluru, India.

This opportunity falls under the Big Data Development category within Cross Industry Solutions and is expected to be effective from 06/05/2025.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
You are a highly skilled and experienced Senior Backend Developer with a focus on Python and backend development. In this role, you will be responsible for designing, developing, and maintaining backend applications using Python. Collaborating with cross-functional teams, you will implement RESTful APIs and web services to ensure high-performance and scalable backend systems.

Your key responsibilities will include optimizing database performance and working with relational databases such as MySQL and PostgreSQL as well as graph databases like Neo4j. You will also develop and manage orchestration workflows using tools like Apache Airflow, and implement and maintain CI/CD pipelines for smooth deployments. Collaboration with DevOps teams for infrastructure management will be essential, along with maintaining high-quality documentation and following version control practices.

To excel in this role, you must have a minimum of 5-8 years of backend development experience with Python. It would be advantageous to have experience with backend frameworks like Node.js/TypeScript and a strong understanding of relational databases with a focus on query optimization. Hands-on experience with graph databases and familiarity with RESTful APIs, web service design principles, version control tools like Git, CI/CD pipelines, and DevOps practices are also required. Your problem-solving and analytical skills will be put to the test in this role, along with your excellent communication and collaboration abilities for working effectively with cross-functional teams. Adaptability to new technologies and a fast-paced work environment is crucial, and a Bachelor's degree in Computer Science, Engineering, or a related field is preferred. Familiarity with modern frameworks and libraries in Python or Node.js will be beneficial for success in this position.

If you believe you are a perfect fit for this role, please send your CV, references, and cover letter to career@e2eresearch.com.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
Zeta Global is looking for a visionary backend developer to join the Data Cloud team and lead the evolution with Generative AI. By leveraging this cutting-edge technology, you will be responsible for developing next-generation data products that provide innovative solutions to client challenges. This role presents an exciting opportunity to immerse yourself in the marketing tech landscape, work on advanced AI projects, and pioneer their application in marketing.

In this role, you will be expected to fulfill two main responsibilities. Firstly, as a Backend Developer, you will conduct data analysis and generate outputs for Gen AI tasks while also supporting standard data analysis functions. Secondly, as a Gen AI expert, you will be tasked with understanding and translating business requirements into Gen AI-powered outputs independently.

Key Responsibilities:
- Analyze data from various sources to generate insights for Gen AI and non-Gen AI tasks.
- Collaborate effectively with cross-functional teams to ensure alignment and understanding.
- Utilize AI assistants to solve business problems and enhance user experience.
- Support product development teams in creating APIs that interact with Gen AI backend data.
- Create data flow diagrams and materials for coordination with DevOps teams.
- Manage deployment of APIs in relevant spaces like Kubernetes.
- Provide technical guidance on Gen AI best practices to UI development and data analyst teams.
- Stay updated on advancements in Gen AI and suggest implementation ideas to maximize its potential.

Requirements:
- Proven experience as a data analyst with a track record of delivering impactful insights.
- Proficiency in Gen AI platforms such as OpenAI and Gemini, and experience in creating and optimizing AI models.
- Familiarity with API deployment, data pipelines, and workflow automation.
- Strong critical thinking skills and a proactive business mindset.
- Excellent communication and collaboration abilities.

Technical Skills:
- Python
- SQL
- AWS Services (Lambda, EKS)
- Apache Airflow
- CI/CD (Serverless Framework)
- Git
- Jira / Trello

Preferred Qualifications:
- Understanding of the marketing/advertising industry.
- Previous involvement in at least one Gen AI project.
- Strong programming skills in Python or similar languages.
- Background in DevOps or closely related experience.
- Proficiency in data management.

What We Offer:
- Opportunity to work on AI-powered data analysis and drive real impact.
- Agile product development timelines for immediate visibility of your work.
- Supportive work environment for continuous learning and growth.
- Competitive salary and benefits package.

Zeta Global is a leading marketing technology company known for its innovative solutions and industry leadership. Established in 2007, the company combines a vast proprietary data set with Artificial Intelligence to personalize experiences, understand consumer intent, and fuel business growth. Zeta Global's technology, powered by the Zeta Marketing Platform, enables end-to-end marketing programs for renowned brands across digital channels. With expertise in Email, Display, Social, Search, and Mobile marketing, Zeta delivers scalable and sustainable acquisition and engagement programs that drive results.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
At Lilly, the focus is on uniting caring with discovery to enhance the lives of people worldwide. As a global healthcare leader headquartered in Indianapolis, Indiana, we are committed to developing and providing life-changing medicines, advancing disease management, and contributing to communities through philanthropy and volunteerism. Our dedicated team of 35,000 employees collaborates to prioritize people and strive towards making a positive impact globally.

The Enterprise Data organization at Lilly has pioneered an integrated data and analytics platform designed to facilitate the efficient processing and analysis of data sets across various environments. As part of this team, you will play a crucial role in managing, monitoring, and optimizing the flow of high-quality data to support data sharing and analytics initiatives. Your responsibilities will include monitoring data pipelines to ensure smooth data flow, managing incidents to maintain data integrity, communicating effectively with stakeholders regarding data issues, conducting root cause analysis to enhance processes, optimizing data pipeline performance, and implementing measures to ensure data accuracy and reliability. Additionally, you will be involved in cloud cost optimization, data lifecycle management, security compliance, automation, documentation, and collaboration with various stakeholders to improve pipeline performance.

To excel in this role, you should possess a Bachelor's Degree in Information Technology or a related field, along with at least 5 years of work experience in Information Technology. Strong analytical, collaboration, and communication skills are essential, along with the ability to adapt to new technologies and methodologies. Proficiency in ETL processes, SQL, AWS services, CI/CD, Apache Airflow, and ITIL practices is required. Certification in AWS and experience with agile frameworks are preferred.

If you are passionate about leveraging technology to drive innovation in the pharmaceutical industry and are committed to ensuring data integrity and security, we invite you to join our team in Hyderabad, India. Embrace the opportunity to contribute to Lilly's mission of making life better for individuals worldwide.
Posted 1 month ago
1.0 - 5.0 years
0 Lacs
hyderabad, telangana
On-site
As a Data Analyst with 1+ years of experience in AdTech, you will be an integral part of our analytics team. Your primary role will involve analyzing large-scale advertising and digital media datasets to support business decisions. You will work with various AdTech data such as ads.txt, programmatic delivery, campaign performance, and revenue metrics.

Your responsibilities will include designing, developing, and maintaining scalable data pipelines using GCP-native tools like Cloud Functions, Dataflow, and Composer. You will be required to write and optimize complex SQL queries in BigQuery for data extraction and transformation. Additionally, you will build and maintain dashboards and reports in Looker Studio to visualize key performance indicators (KPIs) and campaign performance. Collaboration with cross-functional teams including engineering, operations, product, and client teams will be crucial as you gather requirements and deliver analytics solutions. Monitoring data integrity, identifying anomalies, and working on data quality improvements will also be a part of your role.

To be successful in this role, you should have a minimum of 1 year of experience in a data analytics or business intelligence role. Hands-on experience with AdTech datasets, strong proficiency in SQL (especially with Google BigQuery), and experience with building data pipelines using Google Cloud Platform (GCP) tools are essential. Proficiency in Looker Studio, problem-solving skills, attention to detail, and excellent communication skills are also required.

Preferred qualifications include experience with additional visualization tools such as Tableau, Power BI, or Looker (BI), exposure to data orchestration tools like Apache Airflow (via Cloud Composer), familiarity with Python for scripting or automation, and an understanding of cloud data architecture and AdTech integrations (e.g., DV360, Ad Manager, Google Ads).
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines using Spark, specifically PySpark or Spark with Scala. Your role will involve building data ingestion and transformation frameworks for various structured and unstructured data sources. Collaboration with data analysts, data scientists, and business stakeholders is essential to understand requirements and deliver reliable data solutions.

Working with large volumes of data, you will ensure quality, integrity, and consistency while optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP. Implementation of data quality checks and automation for ETL/ELT pipelines is a critical aspect of this role. Monitoring and troubleshooting data issues in production, along with performing root cause analysis, will be part of your responsibilities. Additionally, documenting technical processes, system designs, and operational procedures will be necessary.

The ideal candidate for this position should have at least 3+ years of experience as a Data Engineer or in a similar role. Hands-on experience with PySpark or Spark using Scala is required, along with a strong knowledge of SQL for data querying and transformation. Experience working with any cloud platform (AWS, Azure, or GCP) and a solid understanding of data warehousing concepts and big data architecture are essential. Familiarity with version control systems like Git is also a must-have skill.

In addition to the must-have skills, it would be beneficial to have experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar. Knowledge of Delta Lake, HDFS, or Kafka, familiarity with containerization tools (Docker/Kubernetes), exposure to CI/CD practices and DevOps principles, and an understanding of data governance, security, and compliance standards are considered good-to-have skills.

If you meet the above requirements and are ready to join immediately, please share your details via email to nitin.patil@ust.com for quick processing. Act fast for immediate attention!
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
Imagine what you could do here. At Apple, new ideas have a way of becoming phenomenal products, services, and customer experiences very quickly. Every single day, people do amazing things at Apple. Do you want to impact the future of Manufacturing here at Apple through cutting-edge ML techniques?

This position involves a wide variety of skills and innovation, and is a rare opportunity to work on ground-breaking, new applications of machine learning, research, and implementation. Ultimately, your work would have a huge impact on billions of users across the globe. You can help inspire change by using your skills to influence globally recognized products' supply chain.

The goal of Apple's Manufacturing & Operations team is to take a vision of a product and turn it into a reality. Through the use of statistics, the scientific process, and machine learning, the team recommends and implements solutions to the most challenging problems. We're looking for experienced machine learning professionals to help us revolutionize how we manufacture Apple's amazing products. Put your experience to work in this highly visible role.

The Operations Advanced Analytics team is looking for creative and motivated hands-on individual contributors who thrive in a dynamic environment and enjoy working with multi-functional teams. As a member of our team, you will work on applied machine-learning algorithms to tackle problems that focus on topics such as classification, regression, clustering, optimization, and other related algorithms to impact and optimize Apple's supply chain and manufacturing processes. As a part of this role, you would work with the team to build end-to-end machine learning systems and modules, and deploy the models to our factories. You'll be collaborating with Software Engineers, Machine Learning Engineers, Operations, and Hardware Engineering teams across the company.

Minimum Qualifications
- 3+ years of experience in machine learning algorithms, software engineering, and data mining models with an emphasis on large language models (LLM) or large multimodal models (LMM).
- Masters in Machine Learning, Artificial Intelligence, Computer Science, Statistics, Operations Research, Physics, Mechanical Engineering, Electrical Engineering, or a related field.

Preferred Qualifications
- Proven experience in LLM and LMM development, fine-tuning, and application building. Experience with agents and agentic workflows is a major plus.
- Experience with modern LLM serving and inference frameworks, including vLLM for efficient model inference and serving.
- Hands-on experience with LangChain and LlamaIndex, enabling RAG applications and LLM orchestration.
- Strong software development skills with proficiency in Python. Experienced user of ML and data science libraries such as PyTorch, TensorFlow, Hugging Face Transformers, and scikit-learn.
- Familiarity with distributed computing, cloud infrastructure, and orchestration tools, such as Kubernetes, Apache Airflow (DAG), Docker, Conductor, and Ray for LLM training and inference at scale, is a plus.
- Deep understanding of transformer-based architectures (e.g., BERT, GPT, LLaMA) and their optimization for low-latency inference.
- Ability to meaningfully present results of analyses in a clear and impactful manner, breaking down complex ML/LLM concepts for non-technical audiences.
- Experience applying ML techniques in manufacturing, testing, or hardware optimization is a major plus.
- Proven experience in leading and mentoring teams is a plus.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
kolkata, west bengal
On-site
Candidates who are ready to join immediately can share their details via email for quick processing to nitin.patil@ust.com. Act fast for immediate attention!

With over 5 years of experience, the ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines using Spark, either PySpark or Spark with Scala. They will also be tasked with building data ingestion and transformation frameworks for structured and unstructured data sources. Collaboration with data analysts, data scientists, and business stakeholders to understand requirements and deliver reliable data solutions is a key aspect of the role. The candidate will work with large volumes of data to ensure quality, integrity, and consistency, optimizing data workflows for performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP. Implementation of data quality checks and automation for ETL/ELT pipelines, as well as monitoring and troubleshooting data issues in production, are also part of the responsibilities. Documentation of technical processes, system designs, and operational procedures will be essential.

Must-Have Skills:
- At least 3 years of experience as a Data Engineer or in a similar role.
- Hands-on experience with PySpark or Spark using Scala.
- Strong knowledge of SQL for data querying and transformation.
- Experience working with any cloud platform (AWS, Azure, or GCP).
- Solid understanding of data warehousing concepts and big data architecture.
- Experience with version control systems like Git.

Good-to-Have Skills:
- Experience with data orchestration tools like Apache Airflow, Databricks Workflows, or similar.
- Knowledge of Delta Lake, HDFS, or Kafka.
- Familiarity with containerization tools (Docker/Kubernetes).
- Exposure to CI/CD practices and DevOps principles.
- Understanding of data governance, security, and compliance standards.
Posted 1 month ago