4.0 - 9.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Req ID: 327315. We are currently seeking a GCP & GKE - Sr Cloud Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Job Title / Role: GCP & GKE - Sr Cloud Engineer
Primary Skill: Cloud-Infrastructure-Google Cloud Platform
Minimum work experience: 4+ yrs
Total Experience: 4+ years
Mandatory Skills - Technical Qualification/Knowledge:
Expertise in assessing, designing, and implementing GCP solutions, covering compute, network, storage, identity, security, DR/business continuity strategy, migration, templates, cost optimization, PowerShell, Terraform, Ansible, etc.
Must have the GCP Solution Architect Certification.
Prior experience executing large, complex cloud transformation programs, including discovery, assessment, business case creation, design, build, migration planning, and migration execution.
Prior experience using industry-leading or native discovery, assessment, and migration tools.
Good knowledge of cloud technology, deployment patterns and methods, and application compatibility.
Good knowledge of GCP technologies and their associated components and variations, including the Anthos application platform.
Working knowledge of GCE, GAE, GKE, and GCS.
Hands-on experience creating and provisioning compute instances using the GCP console, Terraform, and the Google Cloud SDK (a Python sketch follows this listing).
Creating databases in GCP and in VMs.
Knowledge of data analysis tooling (BigQuery).
Knowledge of cost analysis and cost optimization.
Knowledge of Git and GitHub.
Knowledge of Terraform and Jenkins.
Monitoring VMs and applications using Stackdriver (Cloud Monitoring).
Working knowledge of VPN and Interconnect setup.
Hands-on experience setting up HA environments.
Hands-on experience creating VM instances in Google Cloud Platform.
Hands-on experience with Cloud Storage and storage retention policies.
Managing users in Google IAM and granting them appropriate permissions.
GKE:
Install Tools - set up Kubernetes tools.
Administer a cluster.
Configure Pods and containers - perform common configuration tasks for Pods and containers.
Monitoring, logging, and debugging.
Inject data into applications - specify configuration and other data for the Pods that run your workload.
Run applications - run and manage both stateless and stateful applications.
Run Jobs - run Jobs using parallel processing.
Access applications in a cluster.
Extend Kubernetes - understand advanced ways to adapt your Kubernetes cluster to the needs of your work environment.
Manage cluster daemons - perform common tasks for managing a DaemonSet, such as performing a rolling update.
Extend kubectl with plugins - create and install kubectl plugins.
Manage HugePages - configure and manage huge pages as a schedulable resource in a cluster.
Schedule GPUs - configure and schedule GPUs for use as a resource by nodes in a cluster.
Certification: GCP Engineer & GKE
Academic Qualification: B.Tech or equivalent, or MCA
Process/Quality Knowledge:
Must have clear knowledge of ITIL-based service delivery; ITIL certification is desired.
Knowledge of quality processes.
Knowledge of security processes.
Soft Skills:
Good communication skills and the ability to work directly with global customers.
Timely and accurate communication.
Demonstrates ownership of technical issues and engages the right stakeholders for timely resolution.
Flexibility to learn and lead other technology areas, such as other public cloud technologies, private cloud, and automation.
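The listing above asks for hands-on provisioning of compute instances with Terraform and the Google Cloud SDK. As a rough illustration of the SDK path only, here is a minimal Python sketch using the google-cloud-compute client; the project ID, zone, and instance name are hypothetical placeholders, not values from the posting.

```python
from google.cloud import compute_v1

def create_instance(project: str, zone: str, name: str) -> compute_v1.Instance:
    """Provision a small Debian VM on the default network; values are illustrative."""
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/e2-small",
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-12",
                    disk_size_gb=10,
                ),
            )
        ],
        network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    )
    client = compute_v1.InstancesClient()
    operation = client.insert(project=project, zone=zone, instance_resource=instance)
    operation.result()  # block until the insert operation completes
    return client.get(project=project, zone=zone, instance=name)

# vm = create_instance("my-project", "asia-south1-a", "demo-vm")  # hypothetical IDs
```

The same resource is usually expressed declaratively in Terraform in practice; the SDK form is shown here only because the document's code examples are kept to one language.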
Posted 1 month ago
4.0 - 9.0 years
8 - 12 Lacs
Chennai
Work from Office
Req ID: 327318. We are currently seeking a GCP & GKE - Sr Cloud Engineer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).
Job Title / Role: GCP & GKE - Sr Cloud Engineer
Primary Skill: Cloud-Infrastructure-Google Cloud Platform
Minimum work experience: 4+ yrs
Total Experience: 4+ years
Mandatory Skills - Technical Qualification/Knowledge:
Expertise in assessing, designing, and implementing GCP solutions, covering compute, network, storage, identity, security, DR/business continuity strategy, migration, templates, cost optimization, PowerShell, Terraform, Ansible, etc.
Must have the GCP Solution Architect Certification.
Prior experience executing large, complex cloud transformation programs, including discovery, assessment, business case creation, design, build, migration planning, and migration execution.
Prior experience using industry-leading or native discovery, assessment, and migration tools.
Good knowledge of cloud technology, deployment patterns and methods, and application compatibility.
Good knowledge of GCP technologies and their associated components and variations, including the Anthos application platform.
Working knowledge of GCE, GAE, GKE, and GCS.
Hands-on experience creating and provisioning compute instances using the GCP console, Terraform, and the Google Cloud SDK.
Creating databases in GCP and in VMs.
Knowledge of data analysis tooling (BigQuery).
Knowledge of cost analysis and cost optimization.
Knowledge of Git and GitHub.
Knowledge of Terraform and Jenkins.
Monitoring VMs and applications using Stackdriver (Cloud Monitoring).
Working knowledge of VPN and Interconnect setup.
Hands-on experience setting up HA environments.
Hands-on experience creating VM instances in Google Cloud Platform.
Hands-on experience with Cloud Storage and storage retention policies (a retention-policy sketch follows this listing).
Managing users in Google IAM and granting them appropriate permissions.
GKE:
Install Tools - set up Kubernetes tools.
Administer a cluster.
Configure Pods and containers - perform common configuration tasks for Pods and containers.
Monitoring, logging, and debugging.
Inject data into applications - specify configuration and other data for the Pods that run your workload.
Run applications - run and manage both stateless and stateful applications.
Run Jobs - run Jobs using parallel processing.
Access applications in a cluster.
Extend Kubernetes - understand advanced ways to adapt your Kubernetes cluster to the needs of your work environment.
Manage cluster daemons - perform common tasks for managing a DaemonSet, such as performing a rolling update.
Extend kubectl with plugins - create and install kubectl plugins.
Manage HugePages - configure and manage huge pages as a schedulable resource in a cluster.
Schedule GPUs - configure and schedule GPUs for use as a resource by nodes in a cluster.
Certification: GCP Engineer & GKE
Academic Qualification: B.Tech or equivalent, or MCA
Process/Quality Knowledge:
Must have clear knowledge of ITIL-based service delivery; ITIL certification is desired.
Knowledge of quality processes.
Knowledge of security processes.
Soft Skills:
Good communication skills and the ability to work directly with global customers.
Timely and accurate communication.
Demonstrates ownership of technical issues and engages the right stakeholders for timely resolution.
Flexibility to learn and lead other technology areas, such as other public cloud technologies, private cloud, and automation.
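For the Cloud Storage retention-policy item above, a minimal sketch with the google-cloud-storage Python client; the project and bucket names are hypothetical.

```python
from google.cloud import storage

client = storage.Client(project="my-project")          # hypothetical project ID
bucket = client.get_bucket("my-archive-bucket")        # hypothetical bucket name

# Retain every object for at least 30 days; the policy is set in seconds.
bucket.retention_period = 30 * 24 * 60 * 60
bucket.patch()

print(bucket.retention_period, bucket.retention_policy_effective_time)
```

Once a retention policy is locked (a separate, irreversible step), objects cannot be deleted or overwritten before the retention period expires, which is the usual compliance use case.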
Posted 1 month ago
4.0 - 9.0 years
8 - 12 Lacs
Hyderabad
Work from Office
Req ID: 327319. We are currently seeking a GCP & GKE - Sr Cloud Engineer to join our team in Hyderabad, Telangana (IN-TG), India (IN).
Job Title / Role: GCP & GKE - Sr Cloud Engineer
Primary Skill: Cloud-Infrastructure-Google Cloud Platform
Minimum work experience: 4+ yrs
Total Experience: 4+ years
Mandatory Skills - Technical Qualification/Knowledge:
Expertise in assessing, designing, and implementing GCP solutions, covering compute, network, storage, identity, security, DR/business continuity strategy, migration, templates, cost optimization, PowerShell, Terraform, Ansible, etc.
Must have the GCP Solution Architect Certification.
Prior experience executing large, complex cloud transformation programs, including discovery, assessment, business case creation, design, build, migration planning, and migration execution.
Prior experience using industry-leading or native discovery, assessment, and migration tools.
Good knowledge of cloud technology, deployment patterns and methods, and application compatibility.
Good knowledge of GCP technologies and their associated components and variations, including the Anthos application platform.
Working knowledge of GCE, GAE, GKE, and GCS.
Hands-on experience creating and provisioning compute instances using the GCP console, Terraform, and the Google Cloud SDK.
Creating databases in GCP and in VMs.
Knowledge of data analysis tooling (BigQuery).
Knowledge of cost analysis and cost optimization.
Knowledge of Git and GitHub.
Knowledge of Terraform and Jenkins.
Monitoring VMs and applications using Stackdriver (Cloud Monitoring).
Working knowledge of VPN and Interconnect setup.
Hands-on experience setting up HA environments.
Hands-on experience creating VM instances in Google Cloud Platform.
Hands-on experience with Cloud Storage and storage retention policies.
Managing users in Google IAM and granting them appropriate permissions.
GKE:
Install Tools - set up Kubernetes tools.
Administer a cluster (a cluster-listing sketch follows this listing).
Configure Pods and containers - perform common configuration tasks for Pods and containers.
Monitoring, logging, and debugging.
Inject data into applications - specify configuration and other data for the Pods that run your workload.
Run applications - run and manage both stateless and stateful applications.
Run Jobs - run Jobs using parallel processing.
Access applications in a cluster.
Extend Kubernetes - understand advanced ways to adapt your Kubernetes cluster to the needs of your work environment.
Manage cluster daemons - perform common tasks for managing a DaemonSet, such as performing a rolling update.
Extend kubectl with plugins - create and install kubectl plugins.
Manage HugePages - configure and manage huge pages as a schedulable resource in a cluster.
Schedule GPUs - configure and schedule GPUs for use as a resource by nodes in a cluster.
Certification: GCP Engineer & GKE
Academic Qualification: B.Tech or equivalent, or MCA
Process/Quality Knowledge:
Must have clear knowledge of ITIL-based service delivery; ITIL certification is desired.
Knowledge of quality processes.
Knowledge of security processes.
Soft Skills:
Good communication skills and the ability to work directly with global customers.
Timely and accurate communication.
Demonstrates ownership of technical issues and engages the right stakeholders for timely resolution.
Flexibility to learn and lead other technology areas, such as other public cloud technologies, private cloud, and automation.
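For the GKE administration skills above, a small sketch using the google-cloud-container Python client to enumerate clusters across all locations; the project ID is a placeholder.

```python
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# "-" as the location wildcard lists clusters across all zones and regions.
response = client.list_clusters(
    request={"parent": "projects/my-project/locations/-"}  # hypothetical project ID
)
for cluster in response.clusters:
    print(cluster.name, cluster.location, cluster.current_node_count, cluster.status.name)
```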
Posted 1 month ago
1.0 - 6.0 years
5 - 9 Lacs
Noida, Chennai, Bengaluru
Work from Office
Req ID: 328283. We are currently seeking a Jr Cloud Engineer - GCP to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Job Title / Role: Jr Cloud Engineer - GCP
Primary Skill: Cloud-Infrastructure-Google Cloud Platform
Minimum work experience: 1+ yrs
Total Experience: 1+ years
Mandatory Skills - Technical Qualification/Knowledge:
Working knowledge of GCE, GAE, GKE, and GCS.
Hands-on experience creating and provisioning compute instances using the GCP console, Terraform, and the Google Cloud SDK.
Creating databases in GCP and in VMs.
Knowledge of data analysis tooling (BigQuery).
Knowledge of cost analysis and cost optimization.
Knowledge of Git and GitHub.
Knowledge of Terraform and Jenkins.
Monitoring VMs and applications using Stackdriver (Cloud Monitoring).
Working knowledge of VPN and Interconnect setup.
Hands-on experience setting up HA environments.
Hands-on experience creating VM instances in Google Cloud Platform.
Hands-on experience with Cloud Storage and storage retention policies.
Managing users in Google IAM and granting them appropriate permissions.
GKE - Designing, implementing, managing, and deploying cloud-native applications in a Kubernetes environment (a pod-listing sketch follows this listing).
GKE - Automation, troubleshooting issues, and mentoring team members.
GKE - Understanding the security, efficiency, and scalability of core services and capabilities.
Certification: GCP Engineer & GKE
Process/Quality Knowledge:
Must have clear knowledge of ITIL-based service delivery.
Soft Skills:
Good communication skills and the ability to work directly with global customers.
Timely and accurate communication.
Demonstrates ownership of technical issues and engages the right stakeholders for timely resolution.
Flexibility to learn and lead other technology areas, such as other public cloud technologies, private cloud, and automation.
Location: Bengaluru, Chennai, Noida, Pune
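The GKE items above involve day-to-day work against a Kubernetes cluster. A minimal sketch with the official Python kubernetes client, assuming cluster credentials have already been fetched locally (e.g., via `gcloud container clusters get-credentials`); cluster and region names would be your own.

```python
from kubernetes import client, config

# Loads the local kubeconfig, e.g. after:
#   gcloud container clusters get-credentials my-cluster --region asia-south1
config.load_kube_config()

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```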
Posted 1 month ago
4.0 - 9.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Req ID: 327298. We are currently seeking a GCP Solution Architect to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Primary Skill: Cloud-Infrastructure-Google Cloud Platform
Minimum work experience: 4+ yrs
Total Experience: 4+ years
Mandatory Skills - Technical Qualification/Knowledge:
Expertise in assessing, designing, and implementing GCP solutions, covering compute, network, storage, identity, security, DR/business continuity strategy, migration, templates, cost optimization, PowerShell, Terraform, Ansible, etc.
Must have the GCP Solution Architect Certification.
Prior experience executing large, complex cloud transformation programs, including discovery, assessment, business case creation, design, build, migration planning, and migration execution.
Prior experience using industry-leading or native discovery, assessment, and migration tools.
Good knowledge of cloud technology, deployment patterns and methods, and application compatibility.
Good knowledge of GCP technologies and their associated components and variations, including the Anthos application platform.
Working knowledge of GCE, GAE, GKE, and GCS.
Hands-on experience creating and provisioning compute instances using the GCP console, Terraform, and the Google Cloud SDK.
Creating databases in GCP and in VMs.
Knowledge of data analysis tooling (BigQuery).
Knowledge of cost analysis and cost optimization.
Knowledge of Git and GitHub.
Knowledge of Terraform and Jenkins.
Monitoring VMs and applications using Stackdriver (Cloud Monitoring; a monitoring-query sketch follows this listing).
Working knowledge of VPN and Interconnect setup.
Hands-on experience setting up HA environments.
Hands-on experience creating VM instances in Google Cloud Platform.
Hands-on experience with Cloud Storage and storage retention policies.
Managing users in Google IAM and granting them appropriate permissions.
GKE:
Install Tools - set up Kubernetes tools.
Administer a cluster.
Configure Pods and containers - perform common configuration tasks for Pods and containers.
Monitoring, logging, and debugging.
Inject data into applications - specify configuration and other data for the Pods that run your workload.
Run applications - run and manage both stateless and stateful applications.
Run Jobs - run Jobs using parallel processing.
Access applications in a cluster.
Extend Kubernetes - understand advanced ways to adapt your Kubernetes cluster to the needs of your work environment.
Manage cluster daemons - perform common tasks for managing a DaemonSet, such as performing a rolling update.
Extend kubectl with plugins - create and install kubectl plugins.
Manage HugePages - configure and manage huge pages as a schedulable resource in a cluster.
Schedule GPUs - configure and schedule GPUs for use as a resource by nodes in a cluster.
Certification: GCP Engineer & GKE
Academic Qualification: B.Tech or equivalent, or MCA
Process/Quality Knowledge:
Must have clear knowledge of ITIL-based service delivery; ITIL certification is desired.
Knowledge of quality processes.
Knowledge of security processes.
Soft Skills:
Good communication skills and the ability to work directly with global customers.
Timely and accurate communication.
Demonstrates ownership of technical issues and engages the right stakeholders for timely resolution.
Flexibility to learn and lead other technology areas, such as other public cloud technologies, private cloud, and automation.
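The listing mentions monitoring VMs with Stackdriver, now Cloud Monitoring. A minimal sketch that pulls recent CPU-utilization time series with the google-cloud-monitoring client; the project ID is a placeholder.

```python
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project = "projects/my-project"  # hypothetical project ID

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)

# List one hour of per-instance CPU utilization samples.
results = client.list_time_series(
    request={
        "name": project,
        "filter": 'metric.type = "compute.googleapis.com/instance/cpu/utilization"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    latest = series.points[0]  # points are returned newest-first
    print(series.resource.labels["instance_id"], round(latest.value.double_value, 4))
```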
Posted 1 month ago
5.0 - 7.0 years
13 - 17 Lacs
Hyderabad
Work from Office
Skilled in multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer, etc.
Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations.
Ability to analyse data for functional business requirements and to interface directly with the customer.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform (a BigQuery sketch follows this listing).
Skilled in multiple GCP services: GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer.
You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies.
An ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work.
Preferred technical and professional experience:
An intuitive individual with an ability to manage change and proven time management.
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Keeps technical knowledge up to date by attending educational workshops and reviewing publications.
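Since the role centers on BigQuery analysis, here is a minimal sketch of running an aggregate query with the google-cloud-bigquery Python client; the project, dataset, and table names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `my-project.analytics.events`          -- hypothetical table
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""
# result() blocks until the query job finishes, then iterates rows.
for row in client.query(query).result():
    print(row.event_date, row.events)
```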
Posted 1 month ago
5.0 - 10.0 years
5 - 9 Lacs
Pune
Work from Office
The IBM Storage Protect Support team (the product was formerly Spectrum Protect, and before that Tivoli Storage Manager, TSM) supports complex, integrated storage products end to end, including Spectrum Protect, Spectrum Protect Plus, and Copy Data Management. This position involves working remotely with IBM customers, which include some of the world's top research, automotive, banking, health care, and technology providers. Candidates must be able to assist with operating systems (AIX, Linux, Unix, Windows), SAN, network protocols, clouds, and storage devices. They will work in a virtual environment with colleagues around the globe and will be exposed to many different types of technologies.
Responsibilities include, but are not limited to:
Provide remote troubleshooting and analysis assistance for usage and configuration questions.
Review diagnostic information to assist in isolating a problem's cause (which could include assistance interpreting traces and dumps).
Identify known defects and fixes to resolve problems.
Develop best-practice articles and support utilities to improve support quality and productivity.
Respond to escalated customer calls, complaints, and queries.
The job requires a flexible schedule to ensure 24x7 support operations and weekend on-call coverage, including extending or taking shifts to cover North America working hours.
Required education: Bachelor's Degree
Preferred education: Bachelor's Degree
Required technical and professional expertise:
Excellent communication skills, both verbal and written.
Provide remote troubleshooting and analysis assistance for usage and configuration questions.
Preferred professional and technical expertise:
At least 5-10 years of in-depth experience with Spectrum Protect (Storage Protect) or competing products in the data protection domain.
Working knowledge of RedHat, OpenShift, or Ansible administration is preferred.
Good networking and troubleshooting skills.
Cloud certification is an added advantage.
Knowledge of object storage and cloud storage is preferred.
Posted 1 month ago
6.0 - 9.0 years
7 - 14 Lacs
Hyderabad
Work from Office
Role Overview: We are seeking a talented and forward-thinking Data Engineer for one of the large financial services GCCs, based in Hyderabad. Responsibilities include designing and constructing data pipelines, integrating data from multiple sources, developing scalable data solutions, optimizing data workflows, collaborating with cross-functional teams, implementing data governance practices, and ensuring data security and compliance.
Technical Requirements:
1. Proficiency in ETL, batch, and streaming processing (an Apache Beam sketch follows this listing)
2. Experience with BigQuery, Cloud Storage, and Cloud SQL
3. Strong programming skills in Python, SQL, and Apache Beam for data processing
4. Understanding of data modeling and schema design for analytics
5. Knowledge of data governance, security, and compliance in GCP
6. Familiarity with machine learning workflows and integration with GCP ML tools
7. Ability to optimize performance within data pipelines
Functional Requirements:
1. Ability to collaborate with Data Operations, Software Engineers, Data Scientists, and Business SMEs to develop data product features
2. Experience in leading and mentoring peers within an existing development team
3. Strong communication skills to craft and communicate robust solutions
4. Proficient in working with Engineering Leads, Enterprise and Data Architects, and Business Architects to build appropriate data foundations
5. Willingness to work on contemporary data architecture in public and private cloud environments
This role offers a compelling opportunity for a seasoned Data Engineer to drive transformative cloud initiatives within the financial sector, leveraging deep experience and expertise to deliver innovative cloud solutions that align with business imperatives and regulatory requirements.
Qualification: Engineering Grad / Postgraduate
Criteria:
1. Proficient in ETL, Python, and Apache Beam for data processing efficiency.
2. Demonstrated expertise in BigQuery, Cloud Storage, and Cloud SQL utilization.
3. Strong collaboration skills with cross-functional teams for data product development.
4. Comprehensive knowledge of data governance, security, and compliance in GCP.
5. Experienced in optimizing performance within data pipelines for efficiency.
6. Relevant Experience: 6-9 years
Connect at 9993809253
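Requirement 3 above names Apache Beam for data processing. A minimal word-count-style pipeline sketch, assuming hypothetical gs:// paths; it runs locally on the DirectRunner and would take DataflowRunner options on GCP.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions()  # add runner/project/region options for Dataflow

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/*.csv")   # hypothetical path
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "KeyByFirstCol" >> beam.Map(lambda cols: (cols[0], 1))
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda key, count: f"{key},{count}")
        | "Write" >> beam.io.WriteToText("gs://my-bucket/output/counts")  # hypothetical path
    )
```

The same pipeline code serves both batch and streaming; swapping the source for a Pub/Sub read and enabling streaming options is the usual path to the streaming case.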
Posted 1 month ago
8.0 - 12.0 years
20 - 30 Lacs
Hyderabad
Work from Office
The ideal candidate will have extensive experience with Google Cloud Platform's data services, building scalable data pipelines, and implementing modern data architecture solutions.
Key Responsibilities:
Design and implement data lake solutions using GCP Cloud Storage and the Storage Transfer Service
Develop and maintain ETL/ELT pipelines for data processing and transformation
Orchestrate complex data workflows using Cloud Composer (managed Apache Airflow; a DAG sketch follows this listing)
Build and optimize BigQuery data models and implement data governance practices
Configure and maintain Dataplex for unified data management across the organization
Implement monitoring solutions using Cloud Monitoring to ensure data pipeline reliability
Create and maintain data visualization solutions using Looker for business stakeholders
Collaborate with data scientists and analysts to deliver high-quality data products
Required Skills & Experience:
8+ years of hands-on experience with GCP data services, including:
Cloud Storage and the Storage Transfer Service for data lake implementation
BigQuery for data warehousing and analytics
Cloud Composer for workflow orchestration
Dataplex for data management and governance
Cloud Monitoring for observability and alerting
Strong experience with ETL/ELT processes and data pipeline development
Proficiency in SQL and at least one programming language (Python preferred)
Experience with Looker or similar BI/visualization tools
Knowledge of data modeling and dimensional design principles
Experience implementing data quality monitoring and validation
Preferred Qualifications:
Google Cloud Professional Data Engineer certification
Experience with streaming data processing using Dataflow or Pub/Sub
Knowledge of data mesh or data fabric architectures
Experience with dbt or similar transformation tools
Familiarity with CI/CD practices for data pipelines
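For the Cloud Composer orchestration responsibility above, a minimal Airflow DAG sketch that loads files from Cloud Storage into BigQuery. The bucket, dataset, and schedule are hypothetical; the operator comes from the apache-airflow-providers-google package, and the `schedule` argument assumes Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="daily_gcs_to_bq",          # hypothetical pipeline name
    schedule="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    load_events = GCSToBigQueryOperator(
        task_id="load_events",
        bucket="my-landing-bucket",                            # hypothetical bucket
        source_objects=["events/{{ ds }}/*.json"],             # one partition per run date
        source_format="NEWLINE_DELIMITED_JSON",
        destination_project_dataset_table="my-project.analytics.events",
        write_disposition="WRITE_APPEND",
        autodetect=True,
    )
```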
Posted 1 month ago
9.0 - 12.0 years
0 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Excellent knowledge of GCP Cloud Run, Cloud Tasks, Cloud Pub/Sub, and Cloud Storage
Hands-on Python or Node.js coding for application development (a minimal Cloud Run service sketch follows this listing)
Understanding of GCP service choices based on SLAs, scalability, compliance, and integration needs
Proficiency in reasoning about the trade-offs between services (e.g., why Cloud Run with 4 vCPUs over GKE with a GPU)
Deep understanding of concurrency settings, DB pool strategies, and scaling
Proficiency in implementing resilient, cost-optimized, low-latency, enterprise-grade cloud solutions
Proficient in recommending configuration, predictive autoscaling, concurrency, cold-start mitigation, failover, etc., for the different GCP services as per business needs
Experience in microservices architecture and development
Able to configure and build systems, not just stitch them together
Must be able to root-cause pipeline latency, scaling issues, errors, and downtime
Cross-functional leadership during architecture definition, implementation, and rollout
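To ground the Cloud Run concurrency and cold-start points above, a minimal Python (Flask) service of the shape Cloud Run expects, with the relevant deploy-time knobs noted in comments; the service name and settings are illustrative, not prescribed values.

```python
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def handle():
    # Handlers should be I/O-bound to benefit from a high per-instance concurrency.
    return "ok"

if __name__ == "__main__":
    # Cloud Run injects PORT. Concurrency, CPU, and cold-start behavior are
    # service settings rather than code, e.g. (illustrative values):
    #   gcloud run deploy my-svc --image IMAGE --concurrency 80 --cpu 4 --min-instances 1
    # where --min-instances keeps warm instances around to mitigate cold starts.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```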
Posted 1 month ago
2.0 - 7.0 years
3 - 6 Lacs
New Delhi, Agra
Work from Office
The Data Management Associate - Photo & Video Content is responsible for the organized handling of all photo and video assets generated by the organization. This includes data collection, systematic backup, content organization, and support for internal teams through efficient retrieval and sharing of visual materials. The role is essential in ensuring that content is well-preserved, organized, searchable, and accessible for communication, outreach, and archival purposes.
Key Responsibilities:
Data Collection & Backup:
Receive and collect photo/video files regularly from Communications & AV team members.
Ensure timely and secure backup of all incoming data.
Data Organization & Management:
Rename files using standardized naming conventions (e.g., date, event, animal name; a renaming sketch follows this listing).
Organize data by project, event, species, or other relevant tags.
Maintain a user-friendly folder structure for quick and efficient retrieval.
Data Upload & Storage:
Upload organized content to central servers or cloud storage platforms.
Monitor storage capacity and coordinate with IT for expansions when necessary.
Manage permissions and ensure secure data access and storage practices.
Content Retrieval & Sharing:
Respond to internal requests for specific visual content quickly and efficiently.
Retrieve and share requested files while maintaining a record of what was shared and with whom.
Oversee the archiving of older and historical photo/video content.
Digitize legacy materials where necessary and integrate them into the archive system.
Conduct periodic audits to ensure data integrity and completeness.
Coordinate with the Communications team to provide content for campaigns, social media, media outreach, and documentation.
Support the team with timely content, especially during field assignments or urgent communication needs.
Generate regular reports on storage usage, data volume, and archival updates.
Maintain proper documentation of file handling protocols and sharing activity.
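For the standardized naming convention described above, a small stdlib-only Python sketch; the folder layout, event, and subject names are hypothetical examples of the date_event_subject pattern the listing mentions.

```python
from datetime import datetime
from pathlib import Path

INCOMING = Path("incoming")   # hypothetical drop folder
ARCHIVE = Path("archive")

def standard_name(src: Path, event: str, subject: str) -> str:
    """Build YYYY-MM-DD_event_subject.ext from the file's modification date."""
    stamp = datetime.fromtimestamp(src.stat().st_mtime).strftime("%Y-%m-%d")
    return f"{stamp}_{event}_{subject}{src.suffix.lower()}"

for src in INCOMING.glob("*.jpg"):
    # Organize by year and event so retrieval stays predictable.
    dest_dir = ARCHIVE / "2025" / "rescue-event"
    dest_dir.mkdir(parents=True, exist_ok=True)
    src.rename(dest_dir / standard_name(src, "rescue-event", "tiger-rani"))
```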
Posted 1 month ago
5.0 - 6.0 years
7 - 8 Lacs
Bengaluru
Work from Office
">Data Scientist 2.5-6 Years Bengaluru data science NLP Role -Data /Applied scientist (Search/ Recommendation) Experience - 2.5 yrs to 6 years Location - Bangalore Strong in Python and experience with Jupyter notebooks , Python packages like polars, pandas, numpy , scikit-learn, matplotlib, etc. Must have: Experience with machine learning lifecycle , including data preparation , training , evaluation , and deployment Must have: Hands-on experience with GCP services for ML & data science Must have: Deep understanding of modern recommendation systems including two-tower , multi-tower , and cross-encoder architectures Must have: Hands-on experience with deep learning for recommender systems using TensorFlow , Keras , or PyTorch Must have: Experience generating and using text and image embeddings (e.g., CLIP , ViT , BERT , Sentence Transformers ) for content-based recommendations Must have: Experience with semantic similarity search and vector retrieval for matching user-item representations Must have: Proficiency in building embedding-based retrieval models , ANN search , and re-ranking strategies Must have: Experience with Vector Search and Hybrid Search techniques Must have: Experience with embeddings generation using models like BERT , Sentence Transformers , or custom models Must have: Experience in embedding indexing and retrieval (e.g., Elastic, FAISS, ScaNN , Annoy ) Must have: Experience with LLMs and use cases like RAG (Retrieval-Augmented Generation) Must have: Understanding of semantic vs lexical search paradigms Must have: Experience with Learning to Rank (LTR) techniques and libraries (e.g., XGBoost , LightGBM with LTR support) Should be proficient in SQL and BigQuery for analytics and feature generation Should have experience with Dataproc clusters for distributed data processing using Apache Spark or PySpark Should have experience deploying models and services using Vertex AI , Cloud Run , or Cloud Functions Should be comfortable working with BM25 ranking (via Elasticsearch or OpenSearch ) and blending with vector-based approaches Good to have: Familiarity with Vertex AI Matching Engine for scalable vector retrieval Good to have: Familiarity with TensorFlow Hub , Hugging Face , or other model repositories Good to have: Experience with prompt engineering , context windowing , and embedding optimization for LLM-based systems Should understand how to build end-to-end ML pipelines for search and ranking applications Must have: Awareness of evaluation metrics for search relevance (e.g., precision@k , recall , nDCG , MRR ) Should have exposure to CI/CD pipelines and model versioning practices GCP Tools Experience: ML & AI : Vertex AI, Vertex AI Matching Engine, AutoML , AI Platform Storage : BigQuery , Cloud Storage, Firestore Ingestion : Pub/Sub, Cloud Functions, Cloud Run Search : Vector Databases (e.g., Matching Engine, Qdrant on GKE), Elasticsearch/OpenSearch Compute : Cloud Run, Cloud Functions, Vertex Pipelines , Cloud Dataproc (Spark/ PySpark ) CI/CD & IaC : GitLab/GitHub Actions
Posted 1 month ago
5.0 - 10.0 years
35 - 40 Lacs
Pune
Work from Office
Job Title: Data Engineer (ETL, Big Data, Hadoop, Spark, GCP), AVP
Location: Pune, India
Role Description:
The engineer is responsible for developing and delivering elements of engineering solutions to accomplish business goals, and is expected to be aware of the bank's important engineering principles. Root cause analysis skills are developed through addressing enhancements and fixes to products; the engineer builds reliability and resiliency into solutions through early testing, peer reviews, and automating the delivery life cycle. The successful candidate should be able to work independently on medium to large sized projects with strict deadlines, in a cross-application, mixed technical environment, and must demonstrate a solid hands-on development track record while working in an agile methodology. The role demands working alongside a geographically dispersed team. The position is part of the buildout of the Compliance Tech internal development team in India; the team will primarily deliver improvements in compliance tech capabilities, which are major components of the regular regulatory portfolio addressing various regulatory commitments and mandated monitors.
What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best-in-class leave policy
Gender-neutral parental leaves
100% reimbursement under the childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complementary health screening for those 35 and above
Your key responsibilities:
Analyze data sets; design and code stable, scalable data ingestion workflows, integrating them into existing workflows.
Work with team members and stakeholders to clarify requirements and provide the appropriate ETL solution.
Work as a senior developer developing analytics algorithms on top of ingested data.
Work as a senior developer on various data sourcing efforts in Hadoop and on GCP.
Ensure new code is tested at both the unit and system levels; design, develop, and peer-review new code and functionality.
Operate as a member of an agile scrum team.
Apply root cause analysis skills to identify the bugs and issues behind failures.
Support production support and release management teams in their tasks.
Your skills and experience:
More than 6 years of coding experience in reputed organizations
Hands-on experience with Bitbucket and CI/CD pipelines
Proficient in Hadoop, Python, Spark, SQL, Unix, and Hive (a PySpark sketch follows this listing)
Basic understanding of on-prem and GCP data security
Hands-on development experience on large ETL/big data systems; GCP is a big plus
Hands-on experience with Cloud Build, Artifact Registry, Cloud DNS, Cloud Load Balancing, etc.
Hands-on experience with Dataflow, Cloud Composer, Cloud Storage, Dataproc, etc.
Basic understanding of data quality dimensions like consistency, completeness, accuracy, and lineage
Hands-on business and systems knowledge gained in a regulatory delivery environment
Banking experience with regulatory and cross-product knowledge
Passionate about test-driven development
Prior experience with release management tasks and responsibilities
Data visualization experience in Tableau is good to have
How we'll support you:
Training and development to help you excel in your career
Coaching and support from experts in your team
A culture of continuous learning to aid progression
A range of flexible benefits that you can tailor to suit your needs
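For the Hadoop/Spark skills listed above, a minimal PySpark aggregation sketch; the gs:// paths and column names are hypothetical, and on GCP a job like this would typically run on Dataproc with the GCS connector.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("trade-aggregation").getOrCreate()

# Hypothetical CSV input with trade_date, desk, price, quantity columns.
trades = spark.read.option("header", True).csv("gs://my-bucket/trades/*.csv")

daily = (
    trades.withColumn("notional", F.col("price") * F.col("quantity"))
    .groupBy("trade_date", "desk")
    .agg(
        F.sum("notional").alias("total_notional"),
        F.count("*").alias("trade_count"),
    )
)
daily.write.mode("overwrite").parquet("gs://my-bucket/aggregates/daily")
```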
Posted 1 month ago
3.0 - 5.0 years
30 - 35 Lacs
Pune
Work from Office
Job Title: DevOps Engineer, AVP
Location: Pune, India
Role Description:
We are seeking a highly skilled and experienced DevOps Engineer to join our team, with a focus on Google Cloud as we migrate and build the financial crime risk platforms on the cloud. The successful candidate will be responsible for designing, implementing, and maintaining our team's infrastructure and workflows on Google Cloud Platform. This is a unique opportunity to work at the intersection of software development and infrastructure management, and to contribute to the growth and success of our team.
The DevOps Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development, and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Work includes:
Planning and developing entire engineering solutions to accomplish business goals
Building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle
Ensuring maintainability and reusability of engineering solutions
Ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow
Reviewing engineering plans and quality to drive re-use and improve engineering capability
Participating in industry forums to drive adoption of innovative technologies, tools, and solutions in the Bank
Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance, and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion, and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.
What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best-in-class leave policy
Gender-neutral parental leaves
100% reimbursement under the childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complementary health screening for those 35 and above
Your key responsibilities:
Design, implement, and maintain our team's infrastructure and workflows on Google Cloud Platform, including GCP services such as Google Kubernetes Engine (GKE), Cloud Storage, Vertex AI, Anthos, and Monitoring.
Design, implement, and maintain our containerization and orchestration strategy using Docker and Kubernetes.
Collaborate with development teams to ensure seamless integration of containerized applications into our production environment.
Collaborate with software developers to integrate machine learning models and algorithms into our products, using PyTorch, TensorFlow, or other machine learning frameworks.
Develop and maintain CI/CD pipelines for our products, using tools such as GitHub and GitHub Actions.
Create and maintain Infrastructure as Code templates using Terraform.
Ensure the reliability, scalability, and security of our infrastructure and products, using monitoring and logging tools such as Anthos Service Mesh (ASM) and Google Cloud's operations suite (GCO).
Work closely with other teams, such as software development, data science, and product management, to identify and prioritize infrastructure and machine learning requirements.
Stay up to date with the latest developments in Google Cloud Platform and machine learning, and apply this knowledge to improve our products and processes.
Your skills and experience:
Bachelor's degree in computer science, engineering, or a related field.
At least 3 years of experience in a DevOps or SRE role, with a focus on Google Cloud Platform.
Strong experience with infrastructure-as-code tools such as Terraform or CloudFormation.
Experience with containerization technologies such as Docker and container orchestration tools such as Kubernetes.
Knowledge of machine learning frameworks such as TensorFlow or PyTorch.
Experience with CI/CD pipelines and automated testing.
Strong understanding of security and compliance best practices, including GCP security and compliance features.
Excellent communication and collaboration skills, with the ability to work closely with cross-functional teams.
Preferred Qualifications:
Master's degree in computer science, engineering, or a related field.
Knowledge of cloud-native application development, including serverless computing and event-driven architecture.
Experience with cloud cost optimization and resource management.
Familiarity with agile software development methodologies and version control systems such as Git.
How we'll support you:
Training and development to help you excel in your career.
Coaching and support from experts in your team.
A culture of continuous learning to aid progression.
A range of flexible benefits that you can tailor to suit your needs.
About us and our teams:
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Chennai
Work from Office
"Position Description: Full Stack Data Engineers will work with Data Scientists and Product Development Python, Dataproc, Airflow PySpark, Cloud Storage, DBT, DataForm, NAS, Pubsub, TERRAFORM, API, Big Query, Data Fusion, GCP, Tekton
Posted 1 month ago
2.0 - 4.0 years
4 - 6 Lacs
Hyderabad
Work from Office
Key Responsibilities:
1. Cloud Infrastructure Management:
Design, deploy, and manage scalable and secure infrastructure on Google Cloud Platform (GCP).
Implement best practices for GCP IAM, VPCs, Cloud Storage, ClickHouse and Apache Superset tool onboarding, and other GCP services.
2. Kubernetes and Containerization:
Manage and optimize Google Kubernetes Engine (GKE) clusters for containerized applications.
Implement Kubernetes best practices, including pod scaling, resource allocation, and security policies.
3. CI/CD Pipelines:
Build and maintain CI/CD pipelines using tools like Cloud Build, Stratus, GitLab CI/CD, or ArgoCD.
Automate deployment workflows for containerized and serverless applications.
4. Security and Compliance:
Ensure adherence to security best practices for GCP, including IAM policies, network security, and data encryption.
Conduct regular audits to ensure compliance with organizational and regulatory standards.
5. Collaboration and Support:
Work closely with development teams to containerize applications and ensure smooth deployment on GCP.
Provide support for troubleshooting and resolving infrastructure-related issues.
6. Cost Optimization:
Monitor and optimize GCP resource usage to ensure cost efficiency.
Implement strategies to reduce cloud spend without compromising performance.
Required Skills and Qualifications:
1. Certifications:
Must hold a Google Cloud Professional DevOps Engineer certification or Google Cloud Professional Cloud Architect certification.
2. Cloud Expertise:
Strong hands-on experience with Google Cloud Platform (GCP) services, including GKE, Cloud Functions, Cloud Storage, BigQuery, and Cloud Pub/Sub.
3. DevOps Tools:
Proficiency in DevOps tools like Terraform, Ansible, Stratus, GitLab CI/CD, or Cloud Build.
Experience with containerization tools like Docker.
4. Kubernetes Expertise:
In-depth knowledge of Kubernetes concepts such as pods, deployments, services, ingress, config maps, and secrets.
Familiarity with Kubernetes tools like kubectl, Helm, and Kustomize.
5. Programming and Scripting:
Strong scripting skills in Python, Bash, or Go.
Familiarity with YAML and JSON for configuration management.
6. Monitoring and Logging:
Experience with monitoring tools like Prometheus, Grafana, or Google Cloud Operations Suite.
7. Networking:
Understanding of cloud networking concepts, including VPCs, subnets, firewalls, and load balancers.
8. Soft Skills:
Strong problem-solving and troubleshooting skills.
Excellent communication and collaboration abilities.
Ability to work in an agile, fast-paced environment.
Posted 1 month ago
8.0 - 10.0 years
9 - 13 Lacs
Bengaluru
Work from Office
What you'll be doing:
Assist in developing machine learning models based on project requirements.
Work with datasets by preprocessing, selecting appropriate data representations, and ensuring data quality.
Perform statistical analysis and fine-tuning using test results.
Support training and retraining of ML systems as needed.
Help build data pipelines for collecting and processing data efficiently.
Follow coding and quality standards while developing AI/ML solutions.
Contribute to frameworks that help operationalize AI models.
What we seek in you:
8+ years of experience in the IT industry
Strong in programming languages like Python
Hands-on experience with one cloud (GCP preferred)
Experience working with Docker
Managing environments (e.g., venv, pip, poetry)
Experience with orchestrators like Vertex AI Pipelines, Airflow, etc.
Understanding of the full ML cycle, end to end
Data engineering and feature engineering techniques
Experience with ML modelling and evaluation metrics
Experience with TensorFlow, PyTorch, or another framework (a minimal Keras sketch follows this listing)
Experience with model monitoring
Advanced SQL knowledge
Aware of streaming concepts like windowing, late arrival, triggers, etc.
Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases
Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices
Schedule: Cloud Composer, Airflow
Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink
CI/CD: Bitbucket + Jenkins / GitLab; infrastructure as code: Terraform
Life at Next:
At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.
Perks of working with us:
Clear objectives to ensure alignment with our mission, fostering your meaningful contribution.
Abundant opportunities for engagement with customers, product managers, and leadership.
You'll be guided by progressive paths while receiving insightful guidance from managers through ongoing feedforward sessions.
Cultivate and leverage robust connections within diverse communities of interest.
Choose your mentor to navigate your current endeavors and steer your future trajectory.
Embrace continuous learning and upskilling opportunities through Nexversity.
Enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies.
Embrace a hybrid work model promoting work-life balance.
Access comprehensive family health insurance coverage, prioritizing the well-being of your loved ones.
Embark on accelerated career paths to actualize your professional aspirations.
Who we are?
We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet our customers' unique needs. Join our passionate team and tailor your growth with us!
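As a concrete anchor for the TensorFlow/PyTorch requirement above, a minimal Keras training sketch on synthetic data; the architecture, metric, and data are illustrative, not a prescribed setup.

```python
import numpy as np
import tensorflow as tf

# Synthetic tabular data standing in for engineered features.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)

print(model.evaluate(X, y, verbose=0))  # [loss, auc]
```

In the full ML cycle the listing describes, a model like this would then be versioned, deployed (e.g., to a Vertex AI endpoint), and monitored for drift rather than evaluated once.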
Posted 1 month ago
10.0 - 18.0 years
8 - 18 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Deep understanding of GCP core services: Compute Engine, Cloud Storage, Virtual Private Cloud (VPC), Cloud Load Balancing.
Strong experience with automation tools: Terraform, Ansible, Deployment Manager.
Proficiency in Docker, Kubernetes, BigQuery, and cloud services.
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
Project Role: Cloud Services Engineer
Project Role Description: Act as liaison between the client and Accenture operations teams for support and escalations. Communicate service delivery health to all stakeholders and explain any performance issues or risks. Ensure the Cloud orchestration and automation capability is operating based on target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends.
Must have skills: Managed File Transfer
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education
Summary: As a Cloud Services Engineer, you will act as a liaison between the client and Accenture operations teams for support and escalations. You will communicate service delivery health to all stakeholders and explain any performance issues or risks, ensure the Cloud orchestration and automation capability operates to target SLAs with minimal downtime, and hold performance meetings to share performance and consumption data and trends.
Roles & Responsibilities:
Expected to be an SME.
Collaborate with and manage the team to perform.
Responsible for team decisions.
Engage with multiple teams and contribute to key decisions.
Provide solutions to problems for their immediate team and across multiple teams.
Ensure effective communication between client and operations teams.
Analyze service delivery health and address performance issues.
Conduct performance meetings to share data and trends.
Professional & Technical Skills:
Must have skills: Proficiency in Managed File Transfer.
Strong understanding of cloud orchestration and automation.
Experience in SLA management and performance analysis.
Knowledge of IT service delivery and escalation processes.
Additional Information:
The candidate should have a minimum of 5 years of experience in Managed File Transfer.
This position is based at our Pune office.
A 15 years full-time education is required.
Qualifications: 15 years full-time education
Posted 1 month ago
4.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Hybrid
Key Responsibilities:
1. Cloud Infrastructure Management:
Design, deploy, and manage scalable and secure infrastructure on Google Cloud Platform (GCP).
Implement best practices for GCP IAM, VPCs, Cloud Storage, ClickHouse and Apache Superset tool onboarding, and other GCP services.
2. Kubernetes and Containerization:
Manage and optimize Google Kubernetes Engine (GKE) clusters for containerized applications.
Implement Kubernetes best practices, including pod scaling, resource allocation, and security policies.
3. CI/CD Pipelines:
Build and maintain CI/CD pipelines using tools like Cloud Build, Stratus, GitLab CI/CD, or ArgoCD.
Automate deployment workflows for containerized and serverless applications.
4. Security and Compliance:
Ensure adherence to security best practices for GCP, including IAM policies, network security, and data encryption.
Conduct regular audits to ensure compliance with organizational and regulatory standards.
5. Collaboration and Support:
Work closely with development teams to containerize applications and ensure smooth deployment on GCP.
Provide support for troubleshooting and resolving infrastructure-related issues.
6. Cost Optimization:
Monitor and optimize GCP resource usage to ensure cost efficiency.
Implement strategies to reduce cloud spend without compromising performance.
Required Skills and Qualifications:
1. Certifications:
Must hold a Google Cloud Professional DevOps Engineer certification or Google Cloud Professional Cloud Architect certification.
2. Cloud Expertise:
Strong hands-on experience with Google Cloud Platform (GCP) services, including GKE, Cloud Functions, Cloud Storage, BigQuery, and Cloud Pub/Sub.
3. DevOps Tools:
Proficiency in DevOps tools like Terraform, Ansible, Stratus, GitLab CI/CD, or Cloud Build.
Experience with containerization tools like Docker.
4. Kubernetes Expertise:
In-depth knowledge of Kubernetes concepts such as pods, deployments, services, ingress, config maps, and secrets.
Familiarity with Kubernetes tools like kubectl, Helm, and Kustomize.
5. Programming and Scripting:
Strong scripting skills in Python, Bash, or Go.
Familiarity with YAML and JSON for configuration management.
6. Monitoring and Logging:
Experience with monitoring tools like Prometheus, Grafana, or Google Cloud Operations Suite.
7. Networking:
Understanding of cloud networking concepts, including VPCs, subnets, firewalls, and load balancers.
8. Soft Skills:
Strong problem-solving and troubleshooting skills.
Excellent communication and collaboration abilities.
Ability to work in an agile, fast-paced environment.
Posted 1 month ago
2.0 - 5.0 years
4 - 7 Lacs
Pune
Work from Office
Hiring for Big Data Engineer - Pune
Job Title: Big Data Engineer
Experience Level: Mid-Level (4+ years)
Job Summary: We are seeking a highly skilled Big Data Engineer to join our data engineering team. The ideal candidate will have experience in designing, developing, and optimizing large-scale data processing systems. You will work with big data technologies to build scalable data pipelines, support advanced analytics, and enable real-time decision-making across the organization.
Key Responsibilities:
Design and develop scalable data pipelines to ingest, transform, and store structured and unstructured data.
Build and maintain distributed processing systems using tools like Apache Spark, Hadoop, Kafka, and Hive (a Kafka sketch follows this listing).
Work closely with data scientists, analysts, and business users to understand data requirements.
Ensure data quality, integrity, and lineage across all stages of the pipeline.
Develop ETL/ELT processes and optimize existing workflows for performance and reliability.
Integrate data from various sources, including APIs, databases, and cloud storage systems.
Monitor and troubleshoot data pipeline issues and ensure uptime and reliability.
Implement data security and governance best practices.
Stay updated with the latest trends and technologies in big data and cloud platforms.
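The responsibilities above include building on Kafka. A minimal produce-and-consume sketch using the kafka-python library (one of several Python Kafka clients; the broker address and topic are hypothetical).

```python
import json
from kafka import KafkaConsumer, KafkaProducer

BROKERS = ["localhost:9092"]  # hypothetical cluster address

producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("clickstream", {"user": 42, "page": "/pricing"})
producer.flush()  # ensure the message is actually sent before reading

consumer = KafkaConsumer(
    "clickstream",
    bootstrap_servers=BROKERS,
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    consumer_timeout_ms=5000,  # stop iterating when idle, for demo purposes
)
for message in consumer:
    print(message.offset, message.value)
```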
Posted 1 month ago
4.0 - 9.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Understanding of design; configuring infrastructure based on a provided design; managing GCP infrastructure using Terraform.
Automate the provisioning, configuration, and management of GCP resources, including Compute Engine, Cloud Storage, Cloud SQL, Spanner, Kubernetes Engine (GKE), and serverless offerings like Cloud Functions and Cloud Run.
Manage and configure GCP service accounts, IAM roles, and permissions to ensure secure access to resources.
Implement and manage load balancers (HTTP(S), TCP/UDP) for high availability and scalability.
Develop and maintain CI/CD pipelines using Cloud Build, GitHub Actions, or similar tools.
Monitor and optimize the performance and availability of our GCP infrastructure.
Candidates with certification will be preferred.
Primary skills: Terraform, CI/CD pipelines, IaC, Docker, Kubernetes
Secondary skills: AWS, Azure, GitHub
Posted 1 month ago
1.0 - 3.0 years
2 - 7 Lacs
Bengaluru
Work from Office
Title: Technical Support Engineer - ILM
Location: [Location of the Job]
Job Type: Full time
Department: IT Infrastructure/Technical Support
Reports to: IT Support Manager/CTO
As a Technical Support Engineer focusing on Information Lifecycle Management (ILM) and related infrastructure, you will be responsible for configuring services on various products and implementing products and technologies at customer premises. In this role, you will provide technical support for our storage, backup, virtualization, and server administration offerings. You will work closely with customers and projects to ensure successful deployment, maintenance, and troubleshooting of products like NetApp storage, VMware, Veeam, other operating system features, and servers. You will troubleshoot at basic, intermediate, and complex levels, based on the incident (NetApp, Veeam, VMware, server, operating system), to resolve issues smoothly. The role also involves maintaining and monitoring local administration (server, storage, backups, virtualization) in the office infrastructure, sharing knowledge with the team to improve the skill set across the team, and working with OEMs on support-related issues to resolve critical problems.
Key Responsibilities:
Site Visits: Travel to the customer's location, based on the project and its specific requirements, for initial implementation and configuration. Additionally, for a few customers, provide support during U.S. time zones, depending on the project's needs.
POC (Proof of Concept): Based on project requirements, demonstrate the product, technologies, and features to the end customer.
Technical Support: Provide technical support and troubleshooting for ILM products across NetApp storage systems, Veeam Backup & Replication, VMware virtualization platforms, the OS level, servers, and others.
System Maintenance: Ensure the availability of NetApp storage, VMware infrastructure, and Veeam backup environments by performing regular maintenance and monitoring (on local infra, and when there is a customer need).
Issue Resolution: Troubleshoot and resolve complex technical issues related to storage, backup, and virtualization.
Learning: Learn and focus on new technologies (HCI, cloud storage) and deepen technical skills in troubleshooting the products.
Documentation: Prepare and maintain technical documentation and knowledge-base articles covering troubleshooting and implementation steps for future reference. Create and maintain project and implementation reports for project sign-off.
Posted 1 month ago
4.0 - 6.0 years
15 - 19 Lacs
Gurugram
Work from Office
Develop and implement Generative AI / AI solutions on Google Cloud Platform.
Work with cross-functional teams to design and deliver AI-powered products and services.
Work on developing, versioning, and executing Python code.
Deploy models as endpoints in the dev environment.
Must Have Skills:
Solid understanding of Python
Experience with deep learning frameworks such as TensorFlow, PyTorch, or JAX
Experience with natural language processing (NLP) and machine learning (ML)
Experience with Cloud Storage, Compute Engine, Vertex AI, Cloud Functions, Pub/Sub, etc.
Hands-on experience with Generative AI support in Vertex AI, specifically hands-on experience with Generative AI models like Gemini and Vertex AI Search (a Gemini sketch follows this listing)
Familiarity with prompt design and prompt tuning for Generative AI models
Ability to work with vector data stores and custom embeddings, and to generate insights based on embeddings
Exposure to and familiarity with developing endpoints using frameworks like Flask or FastAPI
Exposure to and familiarity with BigQuery (basic understanding)
Hands-on experience with LangChain: chain of thought, tools, and simple and sequential chains
Strong communication and teamwork skills
Good to Have Skills:
Excellent problem-solving and analytical skills
Prior experience developing conversational apps/chatbots
Prior experience working on recommendation systems in the media domain
Qualifications and Prior Experience:
4-6+ years of experience in AI development
Experience with Google Cloud Platform, specifically delivering an AI solution on the Vertex AI platform
Experience in developing and deploying AI solutions
Experience with BERT and Transformer models
Experience with Agile development
GCP, Gen AI
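For the hands-on Gemini-on-Vertex requirement above, a minimal sketch using the vertexai SDK from google-cloud-aiplatform; the project, location, and model name are placeholders, and the exact model ID depends on regional availability and SDK version.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical values

model = GenerativeModel("gemini-1.5-pro")  # model ID varies by availability
response = model.generate_content(
    "Summarize the main trade-offs between Cloud Run and GKE "
    "for a low-latency inference service, in three bullet points."
)
print(response.text)
```

In a RAG-style setup of the kind the listing mentions, retrieved document chunks would be prepended to the prompt (or supplied through a framework like LangChain) before calling generate_content.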
Posted 2 months ago
6.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Hybrid
Looking for a Storage Engineer with experience in Dell EMC SAN & NAS storage, AWS, NetApp, Synology, HP, and Hitachi; server operating systems (e.g., Windows Server, Linux); VMware; and AWS or Azure cloud storage.
Posted 2 months ago