10.0 - 15.0 years
15 - 25 Lacs
Gurugram
Remote
Role & responsibilities
• Focus on migration of the on-prem Market EDM solution to GCP
• Ensure the platform's scalability, reliability, and efficiency meet business and client requirements
• Eventually migrate Market EDM capabilities into cloud-native solutions
• Analyse data and functional requirements
• Act as a mentor for other team members, advising on best practice, technology, and processes

Preferred candidate profile
• SQL Server SME with at least 10 years of experience, including SSIS and SSRS
• At least 5 years of Market EDM experience and expertise
• ETL design and implementation experience
• Fluent in T-SQL and Python
• Strong experience of application modernisation and cloud migration programmes
• Experience with the development and deployment of large-scale, complex technology platforms
• Deep understanding of GCP products across database, serverless, containerisation, and APIs
• Strong documentation, communication, and collaboration skills
• Focus on simplicity, automation, and observability
• Expertise in Python, Git, and Jira
• Bachelor's degree in Computer Science or a related field
Posted 2 days ago
9.0 - 14.0 years
10 - 20 Lacs
Gurugram
Work from Office
Key Responsibilities

Executive support to the GCDO:
1. Act as a primary point of contact for the GCDO's office: managing calendars, scheduling meetings, and coordinating travel and events.
2. Prepare briefings, presentations, and follow-up documentation for leadership engagements and external meetings.
3. Handle sensitive information with discretion, maintaining confidentiality and professionalism at all times.
4. Draft correspondence and communications on behalf of the GCDO as needed.
5. Assist in managing action items, priorities, and deadlines from executive meetings and strategic planning sessions.
6. Support the GCDO team in managing the project portfolio across multiple business units.

Project Coordination:
1. Assist in creating and updating project schedules, budgets, status reports, and performance dashboards.
2. Support the GCDO in tracking action items, milestones, and deliverables across multiple concurrent projects.
3. Liaise between project teams and external/internal stakeholders to ensure alignment and effective communication throughout the project lifecycle.
4. Coordinate meetings, workshops, training sessions, and documentation across project phases.
5. Monitor and escalate project risks, issues, and changes to ensure timely resolution.
6. Assist in organizing and facilitating governance meetings, steering committees, and project reviews.

PMO Functions:
1. Support the establishment and continuous improvement of project management standards, templates, and best practices across the organization.
2. Track key performance indicators (KPIs) and compile insights into monthly or quarterly reports.
3. Prepare executive-level dashboards and reports to track project health, risks, and dependencies.
4. Assist in project prioritization, resource planning, and allocation discussions with leadership.
5. Facilitate and document PMO meetings, steering committee reviews, and project retrospectives.

Project Administration (PA):
1. Manage administrative tasks including invoice processing, PO tracking, time logging, and cost tracking.
2. Assist in the procurement and inventory management processes related to project resources.
3. Contribute to risk management activities by maintaining risk registers and supporting mitigation planning.

Minimum Qualifications: Bachelor's degree in Project Management, Business Administration, Information Technology, or a related field. 5+ years of experience in project coordination, executive assistance, or a similar support role within a digital or IT-focused environment.

Skills:
1. Proficiency in project management tools (e.g., Microsoft Project, JIRA, Smartsheet) and collaboration platforms (e.g., Teams, Confluence).
2. Strong MS Office skills, particularly in Excel, PowerPoint, and Outlook.
3. Excellent written and verbal communication skills with a high level of professionalism.
4. Proven ability to handle confidential information with integrity and discretion.
5. Experience supporting senior leadership in a corporate or digital transformation environment.
6. Understanding of digital technologies, transformation strategies, and enterprise governance models.
Posted 2 days ago
7.0 - 11.0 years
18 - 32 Lacs
Pune
Work from Office
Role & responsibilities
• Serve as the key techno-functional lead: coordinate and embed technical standards, increase the maturity of the team, drive best practices, and ensure quality, reliable, robust solutions throughout the engineering team.
• Take accountability for delivery of system changes to the assigned POD and drive artefact delivery to plan; provide management support across other workstreams in the project as required.
• Work closely with diverse business and IT teams to gain a detailed understanding of the business requirements; identify solution options and perform evaluations to get the best solution agreed.
• Support IT teams in issue resolution around the agreed solution, ensuring the release schedule remains on track and functional issues are solved through appropriate, agreed solutions.
• Embed Agile and DevOps practices within the team; work with key stakeholders to deliver innovative solutions within the bank, and proactively push the team to innovate with ideas that drive the department forward.
• Ensure quality of deliverables and code; establish industry-standard practices such as code reviews and quality checks.
• Remove manual processes within the team through automation; provide solutions for application performance tuning and drive improvements.
• Proactively report issues and follow through to resolution.
• Familiarity with HSBC Risk-Based Project Management (RBPM) and Agile project methodology; experience of working with a globally distributed team.
• Innovative and independent thinker with knowledge of effective delivery practices within an agile/scrum delivery cycle.
• Data analysis tool expertise (e.g. SQL knowledge, database tools).

Preferred candidate profile
• 7-10+ years of experience in large-scale software development; knowledge of Google Cloud Platform is good to have.
• Appreciation of Liquidity, IRR, and NCCR reporting (NSFR, LCR, NCCR); capital calculation methodology (Standardised); Credit Risk Mitigation methods (netting, collateral).
• Specialist in-depth experience of investment banking is essential, specifically OTCs, credit derivatives, securities financing transactions (repos and stock borrow/lending), and exchange-traded products.
• Demonstrable experience of data sourcing, liaising with upstream systems, and ensuring end-to-end clarity of requirements across multiple teams.
• Experience within IT/Finance/Risk change delivery programmes/projects is required; knowledge of SQL queries.
• Demonstrable experience in the areas described above in a medium-to-large software delivery environment, with a proven track record of successful delivery of large-scale projects to tight deadlines.
• Excellent interpersonal skills, written and verbal; effective communication with team members and senior stakeholders.
• Delivery of releases in an agile delivery environment as well as traditional waterfall delivery.
• High levels of enthusiasm and a desire to deliver the best-quality products while maintaining very high service levels; strong technical skills to fully understand the development process.
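The liquidity metrics this listing names (NSFR, LCR) are at heart regulatory ratio calculations. As a hedged illustration only: under Basel III, the Net Stable Funding Ratio is available stable funding (ASF) divided by required stable funding (RSF), with a 100% minimum; the weighting of balance-sheet items into ASF/RSF is omitted here and the figures below are invented.

```python
# Illustrative sketch of the NSFR calculation mentioned above.
# Real implementations apply regulator-defined weights to balance-sheet
# categories to derive ASF and RSF; those inputs are assumed here.

def nsfr(available_stable_funding: float, required_stable_funding: float) -> float:
    """Return the Net Stable Funding Ratio as a percentage."""
    return 100.0 * available_stable_funding / required_stable_funding

# Hypothetical example: 120 units of ASF against 100 units of RSF.
ratio = nsfr(120.0, 100.0)
print(f"NSFR = {ratio:.1f}% (regulatory minimum: 100%)")
```

A reporting pipeline would compute such ratios per entity and reporting date, then reconcile them against upstream-sourced data.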
Posted 2 days ago
6.0 - 11.0 years
11 - 18 Lacs
Mumbai, Goregaon
Work from Office
Greetings from Kotak Life Insurance. Interested candidates can share their CV on 8828395189.

Job Title: Cloud Architect
Location: Goregaon, Mumbai

Job Summary: We are seeking a highly skilled and experienced Cloud Architect to lead the design, implementation, and management of our cloud infrastructure. The ideal candidate will have deep technical knowledge of cloud platforms, strong problem-solving skills, and the ability to align cloud solutions with business needs. You will play a critical role in shaping the company's cloud strategy, ensuring scalability, reliability, and security.

Key Responsibilities:
• Design and implement secure, scalable, and cost-effective cloud architecture solutions.
• Lead cloud migration projects from on-premises or other platforms.
• Develop and document architectural blueprints and detailed design specifications.
• Collaborate with DevOps, security, and development teams to integrate cloud solutions with existing systems.
• Establish best practices for cloud adoption, governance, and performance optimization.
• Evaluate and recommend new tools, technologies, and practices for cloud environments.
• Monitor cloud environments and ensure high availability, resilience, and compliance.
• Troubleshoot issues and provide technical guidance to cross-functional teams.
• Stay current on cloud trends and emerging technologies.

Required Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 10+ years of experience in IT, with at least 5 years in cloud architecture or engineering.
• Hands-on experience with one or more major cloud platforms (AWS).
• Strong understanding of cloud security, networking, containers, and automation.
• Knowledge of CI/CD pipelines and DevOps practices.

Preferred Qualifications:
• Cloud certifications (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert, Google Professional Cloud Architect).
• Experience with Kubernetes, Docker, and microservices architecture.
• Excellent communication and stakeholder management skills.
Posted 2 days ago
5.0 - 10.0 years
25 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Job Title: Partner Sales Manager
Job location: Bengaluru/Chennai/Hyderabad/Pune
Experience: 7-10 years in sales, channel sales, and partner operations management
Shift timings: CST or PST
Company website: www.ascendion.com

About the Role: We are seeking a dynamic and experienced Partner Sales Manager to drive our sales efforts with our partner ecosystem. You will work in a fast-paced environment at the leading edge of AI/Gen AI digital technology solutions. As the Partner Sales and Operations Manager, you will collaborate with Ascendion sales and solutioning teams to develop and pursue joint selling relationships with our strategic alliances and partners. You will lead and refine the co-selling process framework and direct a team of sales operations staff to maximize collaboration and success with the partnerships that align with our company's goals. This role requires a combination of situational awareness, excellent communication skills, and detail orientation.

• Relationship Management: Build and maintain strong relationships with sales and partner teams from our alliance partners, the Ascendion sales team, and internal stakeholders.
• Partner Value Proposition: Develop, along with key stakeholders, a compelling value proposition that inspires partners to promote our offering.
• Sales Execution: Be a bridge between field sales account teams and partner account teams to drive winning outcomes for all, together with our customers.
• Collaboration: Serve as the go-to subject matter expert for the sales team on collaborating with partners to generate demand and pipeline that lead to new bookings.
• Process Planning: Continuously refine a comprehensive co-sell process and best practices to drive revenue growth through our alliance partnerships.
• Team Coaching: Provide training and coaching, manage and execute joint projects, and provide regular reports to senior leadership.
• Reporting: Track and report on partnership sales performance, pipeline development, and key metrics to senior management.

Experience and Qualifications:
• Experience: 7-10 years of experience in sales, business development, or partner management in cloud technology targeting enterprise clients, preferably with a focus on AWS or Microsoft.
• Knowledge: Experience in sales cycle progression and procurement processes in partner-led deals to ensure timely closure against critical quarterly targets.
• Influence: Strong social skills, including the ability to collaborate with and influence a wide variety of internal and external sources/resources.
• Skills: Strong negotiation, communication, and interpersonal skills; leverage a wide range of skills (listening, questioning, qualifying, gaining commitment, negotiating, summarizing, closing, etc.) to achieve targets.
• Ability: Work in an onshore-offshore model (US/India) and collaborate independently with multi-functional teams across levels.
• Education: Bachelor's degree in Business, Marketing, or a software-related field.
• Certifications: AWS, Azure, and Snowflake sales certifications are highly desirable.

You should possess an ability to network, build relationships, and explain the value proposition of our digital technologies. You should also have sales or marketing experience and have been in customer-facing situations in the recent past.
Posted 3 days ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Remote
Role & responsibilities
The SRO Engineer is part of the GPU Cloud Business Unit, where the team does not distinguish between L0-L3 levels of resource. The engineer will help build clusters from scratch and will support, monitor, and troubleshoot them.
• Automate daily tasks using Ansible playbooks.
• Support firmware upgrades, failure analysis, and log review.
• Pick up and resolve incident tickets as they are raised, and be available to respond to any alert or alarm on incidents.
• Document work and perform additional troubleshooting as required; documentation is shared so the next shift's SRO Engineers can refer to it.
• Diagnose any hardware failures or configuration issues that come up as incidents.

Preferred candidate profile
• 5+ years of hands-on Linux administration experience.
• Understanding of hardware clusters.
• Experience in any Cloud Service Provider (CSP) environment (AWS/Azure/Oracle/GCP).
• Ansible/Python scripting experience.
• Reliable team player.
• Proven experience with SSH, DNS, DHCP, bare metal, etc.
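As a minimal sketch of the daily-task automation this role describes (the kind of check an Ansible playbook might wrap), the snippet below scans log lines for hardware-failure signatures before an incident ticket is raised. The log format and failure keywords are hypothetical, not taken from any specific vendor.

```python
# Hypothetical hardware-failure triage: flag log lines matching known
# failure signatures so an incident ticket can be raised on them.
import re

FAILURE_PATTERNS = [
    re.compile(r"GPU \d+ has fallen off the bus"),
    re.compile(r"ECC (?:un)?correctable error", re.IGNORECASE),
    re.compile(r"firmware update failed", re.IGNORECASE),
]

def scan_log(lines):
    """Return the log lines that match a known hardware-failure signature."""
    return [ln for ln in lines if any(p.search(ln) for p in FAILURE_PATTERNS)]

sample = [
    "kernel: NVRM: GPU 3 has fallen off the bus.",
    "sshd: accepted publickey for ops",
    "bmc: ECC uncorrectable error on DIMM A1",
]
for hit in scan_log(sample):
    print("INCIDENT:", hit)
```

In practice a script like this would run on every node via Ansible, with hits feeding the ticketing system the posting mentions.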
Posted 3 days ago
8.0 - 10.0 years
14 - 24 Lacs
Pune, Mumbai (All Areas)
Work from Office
Job Title: GCP Cloud Engineer
Experience: 8 to 9 years (5+ years mandatory in GCP)
Location: Mumbai & Pune
Employment Type: Full-Time
Notice Period: Immediate joiners preferred

Job Description: We are looking for a skilled GCP Cloud Engineer with a minimum of 5 years of hands-on experience in designing, implementing, and managing cloud infrastructure on Google Cloud Platform (GCP). The ideal candidate must have strong expertise in Terraform for infrastructure as code (IaC) and should be well versed in GCP-native services, cloud networking, automation, and CI/CD processes.

Required Skills & Qualifications:
• 5+ years of experience in GCP cloud engineering (mandatory)
• Strong hands-on experience with Terraform
• Proficient in GCP services including Compute Engine, VPC, IAM, GKE, Cloud Storage, Cloud Functions, etc.
• Solid understanding of cloud networking, security, and automation tools
• Experience with CI/CD tools and DevOps practices
• Familiarity with scripting languages (e.g., Python, Shell)
• Excellent problem-solving and communication skills

Thanks & Regards
Chetna Gidde | HR Associate, Talent Acquisition | chetna.gidde@rigvedit.com
Posted 3 days ago
3.0 - 8.0 years
5 - 15 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Databuzz is hiring for Java Developer | 3+ yrs | Bangalore/Chennai/Hyderabad | Hybrid. Please mail your profile to haritha.jaddu@databuzzltd.com with the below details if you are interested.

About Databuzz Ltd: Databuzz is a one-stop shop for data analytics, specialized in Data Science, Big Data, Data Engineering, AI & ML, Cloud Infrastructure, and DevOps. We are an MNC based in both the UK and India, and an ISO 27001 & GDPR compliant company.

CTC - ECTC - Notice Period/LWD - (candidates serving notice period will be preferred)

Position: Java Developer
Location: Bangalore/Chennai/Hyderabad
Exp: 3+ yrs

Mandatory skills:
• Overall 3-7 years of professional experience.
• Experience in developing microservices and cloud-native apps using Java/J2EE, REST APIs, Spring Core, Spring MVC Framework, Spring Boot Framework, JPA (Java Persistence API) or any other ORM, Spring Security, and similar tech stacks, both open source and proprietary.
• Proficiency in Java and related technologies (e.g., Spring, Spring Boot, Hibernate, JPA).
• Experience with unit testing using frameworks such as JUnit, Mockito, and JBehave.
• Build and deploy services and components using Gradle, Maven, Jenkins, etc. as part of the CI/CD process.
• Experience working in Google Cloud Platform.
• Experience with any relational database (Oracle, SQL Server, MySQL, PostgreSQL, etc.).
• Experience in building and consuming RESTful APIs.
• Experience with version control systems (e.g., Git).
• Solid understanding of the software development life cycle (SDLC) and agile methodologies.

Regards,
Haritha
Talent Acquisition Specialist
haritha.jaddu@databuzzltd.com
Posted 3 days ago
6.0 - 11.0 years
20 - 35 Lacs
Gurugram
Hybrid
Greetings from BCforward India Technologies Private Limited.

Contract To Hire (C2H) Role
Location: Gurgaon
Payroll: BCforward
Work Mode: Hybrid

JD Description:
Skills: React/React JS; Kotlin; Spring Boot; REST web services; JUnit; Git (GitHub, GitLab, BitBucket, SVN); GCP
Full-stack developer with expertise in React, Java/Kotlin, Spring Boot, RESTful APIs, JUnit, GitHub Actions, Postgres, and GCP.

Please share your updated resume, PAN card soft copy, passport-size photo & UAN history. Interested applicants can share an updated resume to g.sreekanth@bcforward.com

Note: Looking for immediate to 15-days joiners at most. All the best!
Posted 3 days ago
7.0 - 11.0 years
10 - 20 Lacs
Indore, Pune
Work from Office
Senior DevOps Engineer
Experience: 7+ years | Location: Pune/Indore (work from office) | Notice: Immediate

Job Summary: We are seeking an experienced and enthusiastic Senior DevOps Engineer with 7+ years of dedicated experience to join our growing team. In this pivotal role, you will be instrumental in designing, implementing, and maintaining our continuous integration and continuous delivery (CI/CD) pipelines and infrastructure automation. You will champion DevOps best practices, optimize our cloud-native environments, and ensure the reliability, scalability, and security of our systems. This role demands deep technical expertise, an initiative-taking mindset, and a strong commitment to operational excellence.

Key Responsibilities:
• CI/CD Pipeline Management: Design, build, and maintain robust, automated CI/CD pipelines using GitHub Actions to ensure efficient and reliable software delivery from code commit to production deployment.
• Infrastructure Automation: Develop and manage infrastructure as code (IaC) using shell scripting and the gcloud CLI to provision, configure, and manage resources within Google Cloud Platform (GCP).
• Deployment Orchestration: Implement and optimize deployment strategies, leveraging GitHub for version control of deployment scripts and configurations, ensuring repeatable and consistent releases.
• Containerization & Orchestration: Work extensively with Docker for containerizing applications, including building, optimizing, and managing Docker images.
• Artifact Management: Administer and optimize artifact repositories, specifically Artifactory in GCP, to manage dependencies and build artifacts efficiently.
• System Reliability & Performance: Monitor, troubleshoot, and optimize the performance, scalability, and reliability of our cloud infrastructure and applications.
• Collaboration & Documentation: Work closely with development, QA, and operations teams; utilize Jira for task tracking and Confluence for comprehensive documentation of systems, processes, and best practices.
• Security & Compliance: Implement and enforce security best practices within the CI/CD pipelines and cloud infrastructure, ensuring compliance with relevant standards.
• Mentorship & Leadership: Provide technical guidance and mentorship to junior engineers, fostering a culture of learning and continuous improvement within the team.
• Incident Response: Participate in on-call rotations and provide rapid response to production incidents; perform root cause analysis and implement preventative measures.

Required Skills & Experience (Mandatory, 7+ Years):
• Proven experience (7+ years) in a DevOps, Site Reliability Engineering (SRE), or similar role.
• Expert-level proficiency with Git and GitHub, including advanced branching strategies, pull requests, and code reviews.
• Experience designing and implementing CI/CD pipelines using GitHub Actions.
• Deep expertise in Google Cloud Platform (GCP), including compute, networking, storage, and identity services.
• Advanced proficiency in shell scripting for automation, system administration, and deployment tasks.
• Strong firsthand experience with Docker for containerization, image optimization, and container lifecycle management.
• Solid understanding and practical experience with Artifactory (or similar artifact management tools) in a cloud environment.
• Expertise in using the gcloud CLI for automating GCP resource management and deployments.
• Demonstrable experience with continuous integration (CI) principles and practices.
• Proficiency with Jira for agile project management and Confluence for knowledge sharing.
• Strong understanding of networking concepts, security best practices, and system monitoring.
• Excellent critical-thinking skills and an initiative-taking approach to identifying and resolving issues.

Nice-to-Have Skills:
• Experience with Kubernetes (GKE) for container orchestration.
• Familiarity with other infrastructure-as-code (IaC) tools like Terraform.
• Experience with monitoring and logging tools such as Prometheus, Grafana, or GCP's Cloud Monitoring/Logging.
• Proficiency in other scripting or programming languages (e.g., Python, Go) for automation and tool development.
• Experience with database management in a cloud environment (e.g., Cloud SQL, Firestore).
• Knowledge of DevSecOps principles and tools for integrating security into the CI/CD pipeline.
• GCP Professional Cloud DevOps Engineer or other relevant GCP certifications.
• Experience with large-scale distributed systems and microservices architectures.
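The infrastructure-as-code tooling this role centres on (Terraform, scripted gcloud) shares one core idea: reconcile a declared desired state against actual state idempotently, so re-running produces no further changes. The toy sketch below illustrates that principle in plain Python; the resource names and fields are invented.

```python
# Toy illustration of the IaC reconcile loop behind tools like Terraform:
# compare desired vs actual state and emit only the actions needed.
def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to move `actual` to `desired`."""
    actions = []
    for name, cfg in desired.items():
        if name not in actual:
            actions.append(f"create {name}")
        elif actual[name] != cfg:
            actions.append(f"update {name}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

# Hypothetical resources: one drifted VM, one missing bucket.
desired = {"vm-web": {"machine_type": "e2-small"}, "bucket-logs": {"location": "EU"}}
actual = {"vm-web": {"machine_type": "e2-micro"}}
print(reconcile(desired, actual))   # first run: update vm-web, create bucket-logs
print(reconcile(desired, desired))  # second run: no actions (idempotent)
```

Real tools add a state backend, dependency ordering, and a plan/apply split, but the reconcile-to-empty property is the same.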
Posted 3 days ago
4.0 - 9.0 years
8 - 18 Lacs
Hyderabad, Chennai
Work from Office
About the Role: We are looking for a highly skilled and experienced Machine Learning / AI Engineer to join our team at Zenardy. The ideal candidate needs a proven track record of building, deploying, and optimizing machine learning models in real-world applications. You will be responsible for designing scalable ML systems, collaborating with cross-functional teams, and driving innovation through AI-powered solutions.

Location: Chennai, Hyderabad

Key Responsibilities:
• Design, develop, and deploy machine learning models to solve complex business problems
• Work across the full ML lifecycle: data collection, preprocessing, model training, evaluation, deployment, and monitoring
• Collaborate with data engineers, product managers, and software engineers to integrate ML models into production systems
• Conduct research and stay up to date with the latest ML/AI advancements, applying them where appropriate
• Optimize models for performance, scalability, and robustness
• Document methodologies, experiments, and findings clearly for both technical and non-technical audiences
• Mentor junior ML engineers or data scientists as needed

Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related field (Ph.D. is a plus)
• Minimum of 5 hands-on ML/AI projects, preferably in production or with real-world datasets
• Proficiency in Python and ML libraries/frameworks like TensorFlow, PyTorch, Scikit-learn, and XGBoost
• Solid understanding of core ML concepts: supervised/unsupervised learning, neural networks, NLP, computer vision, etc.
• Experience with model deployment using APIs, containers (Docker), and cloud platforms (AWS/GCP/Azure)
• Strong data manipulation and analysis skills using Pandas, NumPy, and SQL
• Knowledge of software engineering best practices: version control (Git), CI/CD, unit testing

Preferred Skills:
• Experience with MLOps tools (MLflow, Kubeflow, SageMaker, etc.)
• Familiarity with big data technologies like Spark, Hadoop, or distributed training frameworks
• Experience working in fintech environments is a plus
• Strong problem-solving mindset with excellent communication skills
• Experience working with vector databases
• Understanding of RAG vs. fine-tuning vs. prompt engineering

Why Join Us:
• Work on impactful, real-world AI challenges
• Collaborate with a passionate and innovative team
• Opportunities for career advancement and learning
• Flexible work environment (remote/hybrid options)
• Competitive compensation and benefits
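The train/evaluate step of the ML lifecycle described above can be sketched in a few lines. To stay self-contained this uses a toy nearest-centroid classifier rather than the TensorFlow/PyTorch/scikit-learn stacks the role actually calls for; the data is invented.

```python
# Toy supervised-learning loop: fit a nearest-centroid classifier, then
# evaluate accuracy on the training data. Real work would use an ML framework.

def train(X, y):
    """Compute one centroid per class from the labelled training data."""
    centroids = {}
    for label in set(y):
        pts = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(c) / len(pts) for c in zip(*pts)]
    return centroids

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], x))

X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
y = ["neg", "neg", "pos", "pos"]
model = train(X, y)
accuracy = sum(predict(model, x) == lab for x, lab in zip(X, y)) / len(y)
print(f"train accuracy: {accuracy:.2f}")
```

Deployment and monitoring then wrap `predict` behind an API and track the same accuracy metric on live data.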
Posted 3 days ago
3.0 - 8.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Project Role: Application Support Engineer
Project Role Description: Act as software detectives; provide a dynamic service identifying and solving issues within multiple components of critical business systems.
Must-have skills: Google Kubernetes Engine
Good-to-have skills: Kubernetes, Google Cloud Compute Services
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education

Job Summary: We are seeking a motivated and talented GCP & Kubernetes Engineer to join our growing cloud infrastructure team. This role will be a key contributor in building and maintaining our Kubernetes platform, working closely with architects to design, deploy, and manage cloud-native applications on Google Kubernetes Engine (GKE).

Responsibilities:
• Extensive hands-on experience with Google Cloud Platform (GCP) and Kubernetes implementations.
• Demonstrated expertise in operating and managing container orchestration engines such as Docker or Kubernetes.
• Knowledge of or experience with various Kubernetes tools like Kubekafka, Kubegres, Helm, Ingress, Redis, Grafana, and Prometheus.
• Proven track record in supporting and deploying various public cloud services.
• Experience in building or managing self-service platforms to boost developer productivity.
• Proficiency in using infrastructure as code (IaC) tools like Terraform.
• Skilled in diagnosing and resolving complex issues in automation and cloud environments.
• Advanced experience in architecting and managing highly available, high-performance multi-zonal or multi-regional systems.
• Strong understanding of infrastructure CI/CD pipelines and associated tools.
• Collaborate with internal teams and stakeholders to understand user requirements and implement technical solutions.
• Experience working in GKE and Edge/GDCE environments.
• Assist development teams in building and deploying microservices-based applications in public cloud environments.

Technical Skillset:
• Minimum of 3 years of hands-on experience in migrating or deploying GCP cloud-based solutions.
• At least 3 years of experience in architecting, implementing, and supporting GCP infrastructure and topologies.
• Over 3 years of experience with GCP IaC, particularly with Terraform, including writing and maintaining Terraform configurations and modules.
• Experience in deploying container-based systems such as Docker or Kubernetes on both private and public clouds (GCP GKE).
• Familiarity with CI/CD tools (e.g., GitHub) and processes.

Certifications:
• GCP ACE certification is mandatory.
• CKA certification is highly desirable.
• HashiCorp Terraform certification is a significant plus.
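A concrete artifact behind the GKE work this role describes is the Kubernetes Deployment manifest. The sketch below builds one programmatically, as a tool or the Kubernetes Python client might render it; the service name and image are hypothetical.

```python
# Build a minimal Kubernetes Deployment manifest (apps/v1) as a dict,
# the shape `kubectl apply` or the Kubernetes API expects as YAML/JSON.
import json

def deployment_manifest(name: str, image: str, replicas: int = 3) -> dict:
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [
                    {"name": name, "image": image,
                     "ports": [{"containerPort": 8080}]}
                ]},
            },
        },
    }

# Hypothetical microservice and image path.
manifest = deployment_manifest("payments-api", "gcr.io/demo/payments:1.0")
print(json.dumps(manifest, indent=2))
```

On GKE the same structure is usually templated via Helm or Terraform's Kubernetes provider rather than hand-built, but the fields are identical.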
Posted 3 days ago
10.0 - 11.0 years
24 - 30 Lacs
Kochi
Work from Office
7+ years in data architecture, including 3 years in GCP environments. Expertise in BigQuery, Cloud Dataflow, Cloud Pub/Sub, Cloud Storage, and related GCP services; data warehousing, data lakes, and real-time data pipelines; SQL and Python.
Posted 4 days ago
4.0 - 9.0 years
20 - 25 Lacs
Bengaluru
Hybrid
The Product team forms the crux of our powerful platforms and connects millions of customers to the product magic. This team of innovative problem solvers develops and builds products that help Epsilon be a market differentiator. They map the future and set new standards for our products, empowered with industry-standard processes and ML and AI capabilities. The team passionately delivers intelligent end-to-end solutions and plays a key role in Epsilon's success story.

The candidate will be a member of the Skynet Development Team, responsible for developing, managing, and implementing a multi-cloud Infrastructure as Code (IaC) framework for the product engineering group using GCP, AWS, Azure, Terraform, and Ansible.

Why we are looking for you:
• You have experience in cloud engineering and use Terraform to develop infrastructure as code.
• You have strong hands-on experience in GCP, AWS, and Azure.
• You enjoy new challenges and are solution-oriented.
• You have a flair for writing scripts in Python.

What you will enjoy in this role:
• As part of the Epsilon Product Engineering team, the pace of the work matches the fast-evolving demands of Fortune 500 clients across the globe.
• As part of an innovative team that is not afraid to try new things, your ideas will come to life in digital marketing products that support more than 50% of automotive dealers in the US.
• An open and transparent environment that values innovation and efficiency.
• Opportunity to explore various GCP, AWS & Azure services in depth and enrich your experience with these fast-growing cloud services.

What you will do:
• Evaluate GCP services and use Terraform to design and develop re-usable Infrastructure as Code modules.
• Work across the product engineering team to learn about their deployment challenges and help them overcome these by delivering reliable solutions.
• Be part of an enriching team and tackle real production engineering challenges.
• Improve your knowledge in the areas of DevOps & cloud engineering by using enterprise tools and contributing to project success.

Qualification:
• BE / B.Tech / MCA (no correspondence course); 2-8 years of experience.
• Proven senior-level experience designing, developing, and handling complex infrastructure-as-code solutions using Terraform, specifically for Google Cloud Platform, including extensive work with GCP modules, state management, and standard methodologies for large-scale environments.
• At least 2+ years of experience working on GCP (primary).
• Strong experience working with Terraform is a must.
• Experience working with Git (or equivalent source control).
• Experience in AWS, Azure & Python will be an advantage.
Posted 4 days ago
5.0 - 8.0 years
12 - 15 Lacs
Pune
Remote
Job Title: Java Developer
Required Experience: 7+ years

Job Overview: We are looking for a passionate Java developer with 7+ years of experience to join our dynamic team. The ideal candidate should have a solid understanding of Java programming, experience with web frameworks, and a strong desire to develop efficient, scalable, and maintainable applications.

Key Responsibilities:
• Design, develop, and maintain scalable, high-performance Java applications.
• Write clean, modular, and well-documented code that follows industry best practices.
• Collaborate with cross-functional teams to define, design, and implement new features.
• Debug, test, and troubleshoot applications across various platforms and environments.
• Participate in code reviews and contribute to the continuous improvement of development processes.
• Work with databases such as MySQL and PostgreSQL to manage application data.
• Implement and maintain RESTful APIs for communication between services and front-end applications.
• Assist in optimizing application performance and scalability.
• Stay updated with emerging technologies and apply them in development projects when appropriate.

Requirements:
• 2+ years of experience in Java development.
• Strong knowledge of Core Java, OOP concepts, Struts, and Java SE/EE.
• Experience with the Spring Framework (Spring Boot, Spring MVC) or Hibernate for developing web applications.
• Familiarity with RESTful APIs and web services.
• Proficiency in working with relational databases like MySQL or PostgreSQL.
• Familiarity with JavaScript, HTML5, and CSS3 for front-end integration.
• Basic knowledge of version control systems like Git.
• Experience with Agile/Scrum development methodologies.
• Understanding of unit testing frameworks such as JUnit or TestNG.
• Strong problem-solving and analytical skills.
• Experience with Kafka.
• Experience with GCP.

Preferred Skills:
• Experience with DevOps tools like Docker, Kubernetes, or CI/CD pipelines.
• Familiarity with microservice architecture and containerization.
• Experience with NoSQL databases like MongoDB is a plus.

Company Overview: Aventior is a leading provider of innovative technology solutions for businesses across a wide range of industries. At Aventior, we leverage cutting-edge technologies like AI, MLOps, DevOps, and many more to help our clients solve complex business problems and drive growth. We also provide a full range of data development and management services, including cloud data architecture, universal data models, data transformation & ETL, data lakes, user management, analytics and visualization, and automated data capture (for scanned documents and unstructured/semi-structured data sources). Our team of experienced professionals combines deep industry knowledge with expertise in the latest technologies to deliver customized solutions that meet the unique needs of each of our clients. Whether you are looking to streamline your operations, enhance your customer experience, or improve your decision-making process, Aventior has the skills and resources to help you achieve your goals. We bring a well-rounded, cross-industry, multi-client perspective to our client engagements. Our strategy is grounded in design, implementation, innovation, migration, and support. We have a global delivery model, a multi-country presence, and a team well equipped with professionals and experts in the field.
Posted 4 days ago
8.0 - 12.0 years
20 - 25 Lacs
Bengaluru
Hybrid
We are seeking a highly skilled and experienced Cloud Data Engineer to join our dynamic team. You will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure on GCP/AWS/Azure, ensuring data is accessible, reliable, and available for business use.

Key Responsibilities:
• Data Pipeline Development: Design, develop, and maintain data pipelines using GCP/AWS/Azure services such as Dataflow, Dataproc, BigQuery, and Cloud Storage.
• Data Integration: Integrate data from various sources (structured, semi-structured, and unstructured) into GCP/AWS/Azure environments.
• Data Modeling: Develop and maintain efficient data models in BigQuery to support analytics and reporting needs.
• Data Warehousing: Implement data warehousing solutions on GCP, optimizing performance and scalability.
• ETL/ELT Processes: Build and manage ETL/ELT processes using tools like Apache Airflow, Data Fusion, and Python.
• Data Quality & Governance: Implement data quality checks, data lineage, and data governance best practices to ensure high data integrity.
• Automation: Automate data pipelines and workflows to reduce manual effort and improve efficiency.
• Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver data solutions that meet business needs.
• Optimization: Continuously monitor and optimize the performance of data pipelines and queries for cost and efficiency.
• Security: Ensure data security and compliance with industry standards and best practices.

Required Skills & Qualifications:
• Education: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
• Experience: 8+ years of experience in data engineering, with at least 2 years working with GCP/Azure/AWS.
• Technical Skills: Strong programming skills in Python, SQL, and PySpark, plus familiarity with Java/Scala. Experience with orchestration tools like Apache Airflow. Knowledge of ETL/ELT processes and tools. Experience with data modeling and designing data warehouses in BigQuery. Familiarity with CI/CD pipelines and version control systems like Git. Understanding of data governance, security, and compliance.
• Soft Skills: Excellent problem-solving and analytical skills. Strong communication and collaboration abilities. Ability to work in a fast-paced environment and manage multiple priorities.

Preferred Qualifications:
• Certifications: GCP Professional Data Engineer or GCP Professional Cloud Architect certification.
• Domain Knowledge: Experience in the finance, e-commerce, or healthcare domain is a plus.
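The data quality checks named in this posting typically come down to programmatic validations run against each batch before it is loaded to the warehouse. As a minimal, library-free Python sketch of that idea (field names and checks are illustrative assumptions, not taken from the posting):

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str

def run_quality_checks(rows, required_fields, unique_key):
    """Run basic completeness and uniqueness checks on a batch of records."""
    results = []
    # Completeness: every required field must be present and non-empty.
    missing = sum(
        1 for r in rows
        if any(r.get(f) in (None, "") for f in required_fields)
    )
    results.append(CheckResult(
        "completeness", missing == 0,
        f"{missing} of {len(rows)} rows have missing required fields"))
    # Uniqueness: the key column must not contain duplicates.
    keys = [r.get(unique_key) for r in rows]
    dupes = len(keys) - len(set(keys))
    results.append(CheckResult(
        "uniqueness", dupes == 0,
        f"{dupes} duplicate {unique_key} values"))
    return results
```

In a real pipeline a failing check would typically block the load or route the batch to quarantine; dedicated frameworks exist for this, but the core logic is no more than what is shown here.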
Posted 4 days ago
12.0 - 18.0 years
35 - 60 Lacs
Hyderabad
Hybrid
Senior Manager, Site Reliability Engineering – Hyderabad
Shift Timings: 1.00 PM - 10.00 PM

People Leader Responsibility:
The position will manage 5 to 10 engineers, both directly and indirectly, including Site Reliability Engineers, Observability Engineers, Performance Engineers, DevSecOps Engineers, and others, ranging from entry-level to senior titles.

Responsibilities:
• Lead and manage a team of Site Reliability Engineers, providing mentorship, guidance, and support to ensure the team's success.
• Develop and implement strategies for improving system reliability, scalability, and performance.
• Establish and enforce SRE best practices, including monitoring, alerting, error budget tracking, and post-incident reviews.
• Collaborate with software engineering teams to design and implement reliable, scalable, and efficient systems.
• Implement and maintain monitoring and alerting systems to proactively identify and address issues before they impact customers.
• Implement performance engineering processes to ensure the reliability of products, services, and platforms.
• Drive automation and tooling efforts to streamline operations and improve efficiency.
• Continuously evaluate and improve our infrastructure, processes, and practices to ensure reliability and scalability.
• Provide technical leadership and guidance on complex engineering projects and initiatives.
• Stay up to date with industry trends and emerging technologies in site reliability engineering and cloud computing.
• Other duties as assigned.

Required Work Experience:
• 10+ years of experience in site reliability engineering or a related field.
• 5+ years of experience in a leadership or management role, managing a team of engineers.
• 5+ years of hands-on working experience with Dynatrace (administration, deployment, etc.).
• Strong understanding of DevSecOps principles.
• Strong understanding of cloud computing principles and technologies, preferably AWS, Azure, or GCP.
• Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
• Proven track record of driving projects to successful completion in a fast-paced, dynamic environment.
• Experience with driving cultural change in technical excellence, quality, and efficiency.
• Experience managing and growing technical leaders and teams.
• Experience constructing, interpreting, and applying metrics to your work and decision-making; able to use those metrics to identify correlations between drivers and results, and to drive prioritization and action.

Preferred Work Experience:
• Proficiency in programming/scripting languages such as Python, Go, or Bash.
• Experience with infrastructure-as-code tools such as Terraform or CloudFormation.
• Deep understanding of Linux systems administration and networking principles.
• Experience with containerization and orchestration technologies such as Docker and Kubernetes.
• Experience or familiarity with IIS, HTML, Java, JBoss.

Knowledge:
• Site Reliability Engineering principles
• DevSecOps principles
• Agile (SAFe)
• Healthcare industry
• ITIL
• ServiceNow
• Jira/Confluence

Skills:
• Strong communication skills
• Leadership
• Programming languages (see above)
• Project management
• Mentorship
• Continuous learning
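The error budget tracking mentioned in this posting reduces to simple arithmetic over an SLO and a request count: the budget is the number of failures the SLO permits in a window, and spend is measured against it. A minimal Python sketch of that calculation (the SLO value and request counts in the usage note are illustrative assumptions):

```python
def error_budget_remaining(slo, total_requests, failed_requests):
    """Return the fraction of the error budget still unspent.

    slo: availability target as a fraction, e.g. 0.999 for "three nines".
    A result of 1.0 means no budget spent; 0.0 or below means exhausted.
    """
    if not 0 < slo < 1:
        raise ValueError("slo must be a fraction strictly between 0 and 1")
    # Allowed failures for the window under this SLO.
    budget = (1 - slo) * total_requests
    if budget == 0:
        return 1.0 if failed_requests == 0 else 0.0
    return 1 - failed_requests / budget
```

For example, a 99.9% SLO over one million requests allows roughly 1,000 failures; 500 observed failures would leave about half the budget, a signal teams often use to gate risky releases.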
Posted 4 days ago
8.0 - 11.0 years
25 - 32 Lacs
Hyderabad
Work from Office
This role is a full-stack developer who provides the technical expertise for the implementation of DevSecOps practices. The Principal DevSecOps Engineer is a senior technical expert role focused on DevSecOps engineering practices and on enabling automation across the enterprise. The purpose of this role is not only to provide technical direction but also to oversee DevSecOps enablement and enhancements to deliver business applications in the AWS/GCP cloud. The Principal DevSecOps Engineer will also be accountable for successful implementation, deployments, and configuration management, along with the development of new automation and agile practices (CI/CD). Lastly, this resource will work with DevOps engineers, architects, and a team of developers to enhance DevOps standards across the organization.
Posted 5 days ago
8.0 - 13.0 years
35 - 55 Lacs
Bengaluru
Hybrid
8+ years of experience as a Cloud Architect with GCP. Expertise in GCP services such as Compute Engine and Kubernetes, a solid understanding of cloud security, and proficiency in DevOps tools. Looking for immediate joiners.
Posted 5 days ago
8.0 - 13.0 years
18 - 30 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Senior Principal Consultant – Google Cloud Engineer

We are seeking a highly skilled and visionary Google Cloud Engineer – Senior Principal Consultant to architect, design, and implement cutting-edge cloud infrastructure and automation solutions on Google Cloud Platform (GCP). The ideal candidate will bring deep technical expertise, leadership, and strategic insight to drive enterprise-level cloud adoption, modernization, and optimization initiatives.

Responsibilities:
• Design and implement cloud-native infrastructure solutions on GCP using Terraform, Deployment Manager, or Pulumi.
• Build and manage CI/CD pipelines using Cloud Build, GitHub Actions, Jenkins, or Spinnaker.
• Architect and manage secure, scalable environments using GKE, Cloud Run, Compute Engine, and Cloud Functions.
• Implement monitoring, logging, and alerting using the Google Cloud Operations Suite (formerly Stackdriver).
• Automate cloud operations using Python, Bash, or Go, following SRE and DevOps best practices.
• Collaborate with solution architects, security teams, and application teams to meet performance, security, and compliance goals.
• Participate in cloud migration and modernization projects, including lift-and-shift, re-platforming, and refactoring.
• Review and improve cost optimization, scalability, and resiliency strategies.
• Lead proof-of-concepts (PoCs), assessments, and workshops with clients.
• Mentor engineering teams and support pre-sales efforts for technical proposals.
• Lead initiatives related to cloud networking, storage, and security, ensuring compliance with industry standards.
• Author and review technical documentation for infrastructure processes, providing clear guidelines for future implementations.

Qualifications we seek in you!

Minimum Qualifications / Skills:
• Bachelor's degree in Information Technology, Computer Science, or a related field.
• Deep hands-on experience with Google Cloud Platform (GCP) and core services (GKE, VPC, IAM, Cloud SQL, Pub/Sub, etc.).
• Deep understanding of cloud security, access controls, and compliance frameworks.
• Experience with containers and orchestration tools such as Docker, Kubernetes, and Helm.
• Solid understanding of networking, security policies, and identity/access management in GCP.
• Strong programming/scripting skills in Python, Go, or Shell.
• Ability to independently solve complex problems while mentoring and collaborating with peers.
• Excellent problem-solving, debugging, and performance tuning skills.
• Strong interpersonal and communication skills in both technical and business contexts.

Preferred Qualifications / Skills:
• Additional GCP certifications: DevOps Engineer, Security Engineer, or Network Engineer.
• Experience in hybrid/multi-cloud environments or hybrid cloud architecture.
• Background in regulated industries such as finance, healthcare, or telecom.
• GCP certifications (Professional Cloud Architect, Cloud DevOps Engineer, etc.).
• Strong knowledge of networking concepts, Kubernetes orchestration, and enterprise security.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
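The "automate cloud operations following SRE and DevOps best practices" responsibility above usually includes making automation resilient to transient API failures. A common pattern is retry with jittered exponential backoff; here is a minimal Python sketch (the function name and the set of retryable exceptions are illustrative assumptions, not a specific GCP client API):

```python
import random
import time

def retry_with_backoff(op, max_attempts=5, base_delay=0.5, max_delay=30.0,
                       retryable=(ConnectionError, TimeoutError),
                       sleep=time.sleep):
    """Call op(), retrying transient failures with jittered exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return op()
        except retryable:
            if attempt == max_attempts:
                raise  # budget exhausted: surface the last error
            # Full jitter: sleep a random amount up to the capped exponential delay.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            sleep(random.uniform(0, delay))
```

Injecting `sleep` as a parameter keeps the helper testable; production GCP client libraries ship their own retry policies, and this sketch only illustrates the underlying idea.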
Posted 5 days ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Warm Greetings from SP Staffing!!

Role: Data Scientist
Experience Required: 5 to 10 yrs
Work Location: Hyderabad/Bangalore/Chennai
Required Skills: Python, GCP AI Platform, BigQuery ML, Cloud AutoML, and Vertex AI

Interested candidates can send resumes to nandhini.spstaffing@gmail.com
Posted 5 days ago
4.0 - 9.0 years
14 - 20 Lacs
Hyderabad
Work from Office
Interested candidates, share your updated CV to dikshith.nalapatla@motivitylabs.com

Job Title: GCP Data Engineer

Overview:
We are looking for a skilled GCP Data Engineer with 4 to 5 years of real hands-on experience in data ingestion, data engineering, data quality, data governance, and cloud data warehouse implementations using GCP data services. The ideal candidate will be responsible for designing and developing data pipelines, participating in architectural discussions, and implementing data solutions in a cloud environment.

Key Responsibilities:
• Collaborate with stakeholders to gather requirements and create high-level and detailed technical designs.
• Develop and maintain data ingestion frameworks and pipelines from various data sources using GCP services.
• Participate in architectural discussions, conduct system analysis, and suggest optimal solutions that are scalable, future-proof, and aligned with business requirements.
• Design data models suitable for both transactional and big data environments, supporting Machine Learning workflows.
• Build and optimize ETL/ELT infrastructure using a variety of data sources and GCP services.
• Develop and implement data and semantic interoperability specifications.
• Work closely with business teams to define and scope requirements.
• Analyze existing systems to identify appropriate data sources and drive continuous improvement.
• Implement and continuously enhance automation processes for data ingestion and data transformation.
• Support DevOps automation efforts to ensure smooth integration and deployment of data pipelines.
• Provide design expertise in Master Data Management (MDM), Data Quality, and Metadata Management.

Skills and Qualifications:
• Overall 4-5 years of hands-on experience as a Data Engineer, with at least 3 years of direct GCP Data Engineering experience.
• Strong SQL and Python development skills are mandatory.
• Solid experience in data engineering, working with distributed architectures, ETL/ELT, and big data technologies.
• Demonstrated knowledge of and experience with Google Cloud BigQuery is a must.
• Experience with Dataproc and Dataflow is highly preferred.
• Strong understanding of serverless data warehousing on GCP and familiarity with DWBI modeling frameworks.
• Extensive experience in SQL across various database platforms.
• Experience in data mapping and data modeling.
• Familiarity with data analytics tools and best practices.
• Hands-on experience with one or more programming/scripting languages such as Python, JavaScript, Java, R, or UNIX Shell.
• Practical experience with Google Cloud services including but not limited to: BigQuery, BigTable, Cloud Dataflow, Cloud Dataproc, Cloud Storage, Pub/Sub, Cloud Functions, Cloud Composer, Cloud Spanner, and Cloud SQL.
• Knowledge of modern data mining, cloud computing, and data management tools (such as Hadoop, HDFS, and Spark).
• Familiarity with GCP tools like Looker, Airflow DAGs, Data Studio, App Maker, etc.
• Hands-on experience implementing enterprise-wide cloud data lake and data warehouse solutions on GCP.
• GCP Data Engineer Certification is highly preferred.

When sharing your CV with dikshith.nalapatla@motivitylabs.com, please include:
• Total Experience:
• Relevant Experience:
• Current Role / Skillset:
• Current CTC (Fixed / Variables, if any / Bonus, if any):
• Payroll Company (Name):
• Client Company (Name):
• Expected CTC:
• Official Notice Period:
• Serving Notice (Yes / No):
• CTC of offer in hand:
• Last Working Day (in current organization):
• Location of the offer in hand:
• Willing to work from office:

Note: 5 DAYS WORK FROM OFFICE IS MANDATORY.
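The ETL/ELT and data transformation work this posting describes often centers on one step: normalizing raw records before load while quarantining rows that fail parsing, so one bad record does not abort the batch. A minimal Python sketch of that pattern (the column names and formats are illustrative assumptions):

```python
from datetime import datetime

def transform_batch(raw_rows):
    """Normalize a batch of raw CSV-like rows before loading to a warehouse.

    Rows that fail parsing are routed to a rejects list instead of
    aborting the whole batch, so bad data can be quarantined and replayed.
    """
    clean, rejects = [], []
    for row in raw_rows:
        try:
            clean.append({
                "order_id": int(row["order_id"]),
                "amount": round(float(row["amount"]), 2),
                "order_date": datetime.strptime(
                    row["order_date"], "%Y-%m-%d").date(),
            })
        except (KeyError, ValueError) as exc:
            rejects.append({"row": row, "error": str(exc)})
    return clean, rejects
```

In a GCP pipeline the same split appears as a main output and a dead-letter output (for instance, a Dataflow side output feeding a quarantine table), but the transformation logic itself is this simple.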
Posted 6 days ago
7.0 - 12.0 years
30 - 35 Lacs
Noida
Hybrid
Job Title: Java Technical Lead / Developer (Full Stack)
Location: Noida
Experience: 7-15 Years

Key Responsibilities:
• Lead and mentor a team of Java developers through the full software development lifecycle.
• Architect, design, and develop scalable and secure Java-based applications.
• Collaborate with cross-functional teams including DevOps, QA, and Product Management.
• Conduct code reviews, enforce coding standards, and ensure high code quality.
• Translate business requirements into technical specifications and solutions.
• Drive adoption of best practices in design, development, and deployment.
• Troubleshoot and resolve complex technical issues in development and production.
• Stay current with emerging technologies and propose their adoption where relevant.

Required Skills & Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 10-12 years of hands-on experience in Java development with React.js.
• Strong expertise in Java 11+ / Java 17, React.js, Spring Boot, Spring Cloud, and microservices architecture.
• Experience with RESTful APIs, JPA/Hibernate, and SQL/NoSQL databases (MySQL, PostgreSQL, MongoDB).
• Proficiency in CI/CD tools (Jenkins, GitHub Actions), Docker, and Kubernetes.
• Familiarity with cloud platforms (AWS, Azure, or GCP).
• Solid understanding of design patterns, system architecture, and performance tuning.
• Experience with Agile/Scrum methodologies.

Preferred Skills:
• Exposure to reactive programming (Spring WebFlux, Project Reactor).
• Experience with GraphQL, Kafka, or gRPC.
• Knowledge of DevSecOps and application security best practices.
• Familiarity with observability tools (ELK, Prometheus, Grafana).

What We Offer:
• Competitive compensation and performance-based bonuses.
• Flexible work environment and hybrid/remote options.
• Opportunities for leadership, innovation, and continuous learning.
• A collaborative and inclusive work culture.
Posted 6 days ago
0.0 - 5.0 years
2 - 6 Lacs
New Delhi, Gurugram, Delhi / NCR
Work from Office
Customer Service Role for International Process
Contact: Kajal - 8860800235
Domains: Travel / Banking / Technical
Eligibility: Grad/UG, Fresher/Experienced
Salary: depending on last take-home (up to 5 LPA)
Location: Gurugram / Noida (work from office, 5 days, 24x7 shifts)
Perks: cab + incentives

Required Candidate Profile:
• Good communication skills
• Immediate joiners
• Should be willing to do 24x7 shifts
Posted 6 days ago