7 - 12 years
8 - 14 Lacs
Delhi NCR, Mumbai, Bengaluru
Work from Office
Job Summary : We are seeking an experienced Neo4j Engineer with deep expertise in graph databases to join our team. The ideal candidate will design, develop, and deploy applications using Neo4j as the primary backend, while also working on the architecture of large-scale data environments. This role involves working with containerized microservices, leveraging AWS, and optimizing performance to deliver robust and scalable solutions. Key Responsibilities : - Neo4j Application Development - Design and develop applications that utilize Neo4j as the primary backend database. - Build and optimize graph database models for efficient querying and data representation. - Microservices Architecture - Develop and deploy containerized microservices using Java, Docker, and Kubernetes to enhance scalability and maintainability. - Contribute to the development of cloud-native applications with a focus on Python and Java. - AWS Deployment and Management - Utilize AWS services (e.g., EC2, ECS) to manage and deploy applications in the cloud, ensuring high availability and performance. - Implement best practices for secure, scalable, and resilient cloud environments. - Performance Optimization and Troubleshooting - Optimize Neo4j queries and configurations for handling large-scale data environments, ensuring efficiency and speed. - Monitor and troubleshoot Neo4j databases, performing migrations and ensuring data integrity across environments. - Data Architecture and Modeling - Contribute to the architecture and design of graph data models to support application needs. - Stay updated on best practices, tools, and advancements in graph database technology. - Cross-functional Collaboration - Collaborate with data scientists, engineers, and stakeholders to align Neo4j data models with application requirements. Required Skills and Experience : - 10+ years of experience in software engineering, with a strong focus on Neo4j and graph databases. - Expertise in Neo4j database design, data modeling, and graph querying. - Proficient in Java and Python programming for developing cloud-native applications. - Strong experience with containerization tools like Docker and orchestration platforms like Kubernetes. - Experience deploying and managing applications on AWS (EC2, ECS, RDS, etc.). - Demonstrated ability to optimize and troubleshoot Neo4j databases in large-scale environments. Preferred Qualifications : - Neo4j Certification is highly desirable. - Familiarity with CI/CD processes, automation tools, and DevOps best practices. - Knowledge of additional cloud platforms like GCP or Azure. Location-Delhi NCR,Bangalore,Chennai,Pune,Kolkata,Ahmedabad,Mumbai,Hyderabad
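For illustration only, here is a minimal sketch of the kind of Neo4j-backed access code this role describes, using the official `neo4j` Python driver (5.x API assumed). The node labels, relationship type, URI, and credentials are placeholders, not details from this posting.

```python
# Minimal sketch of an application reading from Neo4j as its backend store.
# Labels (:Person, :Company), the bolt URI, and credentials are illustrative assumptions.
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # assumed local instance
AUTH = ("neo4j", "password")    # assumed credentials

def coworkers_of(tx, person_name: str):
    # A typical graph query: traverse relationships instead of joining tables.
    query = (
        "MATCH (p:Person {name: $name})-[:WORKS_AT]->(c:Company)"
        "<-[:WORKS_AT]-(colleague:Person) "
        "RETURN colleague.name AS name, c.name AS company"
    )
    return [record.data() for record in tx.run(query, name=person_name)]

if __name__ == "__main__":
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        with driver.session() as session:
            for row in session.execute_read(coworkers_of, "Alice"):
                print(row)
```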
Posted 3 months ago
4 - 6 years
20 - 27 Lacs
Pune
Work from Office
Key Responsibilities: 1. Backend Development: - Architect and develop scalable backend systems using Python and Django. - Implement RESTful APIs and ensure seamless integration with frontend and mobile applications. - Conduct code reviews, optimise existing codebase, and implement best practices for maintainability. - Implement security best practices, including encryption and access controls. 2. Database & Data Management: - Design and manage databases, focusing on data integrity, security, and performance. - Write complex queries and optimise data processing to handle high-transaction volumes effectively. 3. AWS & DevOps: - Set up and maintain AWS services (EC2, S3, SQS, ECS, Lambda, RDS, etc.) to support scalable applications. - Implement CI/CD pipelines, monitoring, and logging solutions. - Ensure high system availability and data recovery through backup strategies and failover processes. 4. Team Collaboration: - Collaborate with cross-functional teams, including product managers and front-end developers, to deliver high-quality features. - Mentor junior developers, conduct code reviews, and help improve team coding standards. Requirements: 1. Technical Skills: - Strong expertise in Python and Django. - Proficiency in DevOps practices, including continuous integration and continuous deployment (CI/CD). - Solid experience with AWS services, including EC2, SQS, S3, ECS, RDS, Lambda, etc. - Familiarity with containerization (Docker) and orchestration (Kubernetes) is a plus. - Experience with both SQL and NoSQL databases (e.g., PostgreSQL, MongoDB). 2. Professional Skills: - Proven ability to design and implement scalable backend solutions. - Strong problem-solving skills and the ability to work in a fast-paced startup environment. - Excellent written and verbal communication skills, with experience in documentation.
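As a hedged illustration of the RESTful API work this posting describes, the sketch below shows a Django REST Framework endpoint. The `Order` model, its fields, and the app name are hypothetical; project settings and URL routing are assumed to exist.

```python
# Minimal sketch of a Django REST Framework endpoint of the kind this role builds.
# The Order model and its fields are hypothetical placeholders.
from rest_framework import permissions, serializers, viewsets
from myapp.models import Order  # assumed app and model


class OrderSerializer(serializers.ModelSerializer):
    class Meta:
        model = Order
        fields = ["id", "customer", "total", "status", "created_at"]
        read_only_fields = ["id", "created_at"]


class OrderViewSet(viewsets.ModelViewSet):
    """CRUD API for orders; access restricted to authenticated users."""
    queryset = Order.objects.select_related("customer").order_by("-created_at")
    serializer_class = OrderSerializer
    permission_classes = [permissions.IsAuthenticated]

# urls.py (assumed): router = DefaultRouter(); router.register("orders", OrderViewSet)
```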
Posted 3 months ago
6 - 11 years
8 - 12 Lacs
Hyderabad
Work from Office
About the Role: Grade Level (for internal use): 10. This Senior Software Developer position will focus on developing software for our enterprise feed platforms. The candidate must have demonstrable experience with Java and database technologies and experience in software development. The role requires design, development, testing, and support of these platforms. Responsibilities: Be part of an agile team that designs, develops, and maintains the enterprise feed systems and other related software applications. Participate in design sessions for new product features and capabilities. Produce technical design documents and participate in technical walkthroughs. Engineer components and common services based on standard corporate development models, languages, and tools. Collaborate effectively with technical and non-technical stakeholders. Support and maintain production environments. Requirements: Bachelor's/PG degree in Computer Science, Information Systems, or equivalent. A minimum of 6+ years of strong experience in application development using Oracle or Microsoft technologies. Proficient with software development lifecycle (SDLC) methodologies like Agile, Scrum, and Test-driven development. Strong command of essential technologies: SQL, AWS EC2, S3, RDS, Redshift, AWS Lambda and Step Functions, Airflow, Terraform, Python/Java, T-SQL, PL/SQL. Good experience with developing solutions involving relational database technologies on SQL Server and/or Oracle platforms. Excellent verbal and written communication skills.
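Since the posting lists Airflow alongside AWS and Python, here is a hedged sketch of a minimal Airflow DAG for a daily feed extract (Airflow 2.x assumed). The DAG id, schedule, and task body are placeholders, not details from the posting.

```python
# Illustrative Airflow 2.x DAG for a daily feed extract; all names are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_stage(**context):
    # Placeholder task body: pull rows from the source database and stage them to S3.
    print("extracting feed for", context["ds"])

with DAG(
    dag_id="enterprise_feed_daily",
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",   # run every day at 06:00 (Airflow 2.4+ keyword)
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_stage", python_callable=extract_and_stage)
```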
Posted 3 months ago
14 - 19 years
16 - 20 Lacs
Bengaluru
Work from Office
Technical Architect - J48767. The Technical Architect will lead the design and solutioning of scalable, high-performance systems. The ideal candidate will have expertise in Microservices Architecture, AWS, and containerization technologies like Docker and Kubernetes, and will be responsible for modernizing production systems, creating POCs, and ensuring system performance. Key Responsibilities: Design and implement Microservices-based architectures. Lead solutioning for cloud-based applications, primarily using AWS (EC2, RDS, EKS, S3, etc.). Develop and validate POCs to demonstrate architecture viability. Optimize system performance and scalability. Work with stakeholders to design API gateways and other integrations. Modernize live production systems and ensure seamless transitions. Create architectural diagrams using Draw.io and collaborate using Confluence and Miro. Required Candidate Profile: Candidate experience should be 14 to 20 years. Candidate degree should be BE-Comp/IT, BTech-Comp/IT, or ME-Comp/IT.
Posted 3 months ago
5 - 10 years
20 - 25 Lacs
Bengaluru
Work from Office
Job Description: AWS Data Engineer - Hadoop Migration. We are seeking an experienced AWS Principal Data Architect to lead the migration of Hadoop DWH workloads from on-premise to AWS EMR. As an AWS Data Architect, you will be a recognized expert in cloud data engineering, developing solutions designed for the data processing and warehousing requirements of large enterprises. You will be responsible for designing, implementing, and optimizing the data architecture in AWS, ensuring highly scalable, flexible, secure, and resilient cloud architectures that solve business problems and help accelerate the adoption of our clients' data initiatives on the cloud. Key Responsibilities: Lead the migration of Hadoop workloads from on-premise to the AWS EMR stack. Design and implement data architectures on AWS, including data pipelines, storage, and security. Collaborate with cross-functional teams to ensure seamless migration and integration. Optimize data architectures for scalability, performance, and cost-effectiveness. Develop and maintain technical documentation and standards. Provide technical leadership and mentorship to junior team members. Work closely with stakeholders to understand business requirements and ensure data architectures meet business needs. Work alongside customers to build enterprise data platforms using AWS data services like Elastic MapReduce (EMR), Redshift, Kinesis, Data Exchange, DataSync, RDS, Data Store, Amazon MSK, DMS, Glue, AppFlow, AWS Zero-ETL, Glue Data Catalog, Athena, Lake Formation, S3, RMS, DataZone, Amazon MWAA, and Kong APIs. Deep understanding of Hadoop components, conceptual processes, and system functioning, and of the corresponding components in AWS EMR and other AWS services. Good experience with Spark on EMR. Experience in Snowflake/Redshift. Good understanding of the AWS system engineering aspects of setting up CI/CD pipelines on AWS using CloudWatch, CloudTrail, KMS, IAM IDC, Secrets Manager, etc. Extract best-practice knowledge, reference architectures, and patterns from these engagements for sharing with the worldwide AWS solution architect community. Basic Qualifications: 10+ years of IT experience, with 5+ years of experience in Data Engineering and 5+ years of hands-on experience in AWS Data/EMR services (e.g., S3, Glue, Glue Catalog, Lake Formation). Strong understanding of Hadoop architecture, including HDFS, YARN, MapReduce, Hive, and HBase. Experience with data migration tools like Glue and DataSync. Excellent knowledge of data modeling, data warehousing, ETL processes, and other data management systems. Strong understanding of security and compliance requirements in the cloud. Experience in Agile development methodologies and version control systems. Excellent communication and leadership skills. Ability to work effectively across internal and external organizations and virtual teams. Deep experience with AWS native data services including Glue, Glue Catalog, EMR, Spark on EMR, DataSync, RDS, Data Exchange, Lake Formation, and Athena. AWS Certified Data Analytics – Specialty. AWS Certified Solutions Architect – Professional. Experience with containerization and serverless computing. Familiarity with DevOps practices and automation tools. Experience in Snowflake/Redshift implementation is additionally preferred. Preferred Qualifications: Technical degrees in computer science, software engineering, or mathematics. Cloud and data engineering background with migration experience.
Other Skills: A critical thinker with strong research, analytical, and problem-solving skills. Self-motivated with a positive attitude and the ability to work independently and/or in a team. Able to work under tight timelines and deliver on complex problems. Must be able to work flexible hours (including weekends and nights) as needed. A strong team player.
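For the Hadoop-to-EMR migration work described in this posting, here is a hedged PySpark sketch of the common pattern: read an existing Hive table via the metastore on EMR and write curated output to S3 as Parquet. The database, table, column, and bucket names are placeholders.

```python
# Sketch of a Hive-on-HDFS aggregation re-targeted at EMR, with output landing in S3.
# Database, table, column, and bucket names are placeholders; cluster config comes from EMR.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hadoop-to-emr-migration-sample")
    .enableHiveSupport()   # read existing Hive metastore tables on EMR
    .getOrCreate()
)

daily_totals = (
    spark.table("legacy_dw.transactions")        # Hive table carried over from on-prem
    .where(F.col("txn_date") >= "2024-01-01")
    .groupBy("txn_date", "region")
    .agg(F.sum("amount").alias("total_amount"))
)

# On EMR the same job writes to S3 instead of HDFS; Parquet keeps it Athena/Redshift friendly.
daily_totals.write.mode("overwrite").partitionBy("txn_date").parquet(
    "s3://example-curated-bucket/daily_totals/"
)
```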
Posted 3 months ago
3 - 8 years
30 - 35 Lacs
Chennai, Bengaluru, Hyderabad
Work from Office
Key skills required are Java, JDBC, and AWS (Lambda, ECS, API Gateway, RDS, SQS, SNS, DynamoDB, MQ, Step Functions). Should be aware of Terraform, SQL, PL/SQL, Jenkins, GitLab, and standard development tools. Location: Chennai, Bengaluru, Hyderabad, Pune, Noida
Posted 3 months ago
8 - 13 years
15 - 27 Lacs
Hyderabad
Hybrid
Role: AWS DevOps Lead. Exp.: 8+ years. Job Description: Overall 8+ years of experience, with 5 years of hands-on experience in deployment automation using IaC, configuration management, orchestration, containerization, and running a complete CI/CD pipeline on both cloud and on-prem. Thorough understanding and hands-on skills in the below. Infrastructure as Code: Terraform, AWS CloudFormation, Puppet. Source control: GitLab, GitHub. CI/CD: Jenkins, GitLab CI/CD. Containerization/Orchestration: Kubernetes, AWS ECS, AWS EKS, Docker. CDN: Akamai, AWS CloudFront. Monitoring: AWS CloudWatch, New Relic. Security: AWS CodeGuru, GuardDuty, Security Hub, Snyk, Veracode, Rapid7. Programming/Scripting: Python, Shell scripting. Good understanding of networking, security rules, firewalls, WAF, API gateways, and auto-scaling principles. Hands-on experience using AWS (VPC, Subnets, ALB/NLB, RDS, ECS, SQS, Cognito, Lambda, Memcached) is required. Understanding of programming concepts and best practices is required. Experience dealing with production incidents in a multi-tier application environment is required. Experience managing production workloads with Site Reliability Engineering best practices. Good understanding of various deployment strategies (rolling updates, blue/green, canary). Strong exposure to DevSecOps testing methods (SAST, DAST, SCA) is preferred.
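As a hedged example of the Python/boto3 automation and monitoring work this role involves, the sketch below creates a CloudWatch CPU alarm for an ECS service. The region, cluster, service, and SNS topic ARN are assumptions, not details from the posting.

```python
# Illustrative boto3 automation: put a CPU-utilization alarm on an ECS service.
# Region, cluster/service names, and the SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="orders-service-cpu-high",
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "prod-cluster"},    # assumed names
        {"Name": "ServiceName", "Value": "orders-service"},
    ],
    Statistic="Average",
    Period=300,                 # 5-minute datapoints
    EvaluationPeriods=2,        # two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:devops-alerts"],  # placeholder topic
)
```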
Posted 3 months ago
5 - 8 years
8 - 10 Lacs
Hyderabad
Work from Office
S&P Dow Jones Indices is seeking a Python/Big Data developer to be a key player in the implementation and support of data platforms for S&P Dow Jones Indices. This role requires a seasoned technologist who contributes to application development and maintenance. The candidate should actively evaluate new products and technologies to build solutions that streamline business operations. The candidate must be delivery-focused with solid financial applications experience, and will assist in day-to-day support and operations functions, design, development, and unit testing. Responsibilities and Impact: Lead the design and implementation of EMR/Spark workloads using Python, including data access from relational databases and cloud storage technologies. Implement new, powerful functionalities using Python, PySpark, AWS, and Delta Lake. Independently come up with optimal designs for the business use cases and implement them using big data technologies. Enhance existing functionalities in Oracle/Postgres procedures and functions. Performance tuning of existing Spark jobs. Implement new functionalities in Python, Spark, and Hive. Collaborate with cross-functional teams to support data-driven initiatives. Mentor junior team members and promote best practices. Respond to technical queries from the operations and product management teams. What We're Looking For - Basic Required Qualifications: Bachelor's degree in Computer Science, Information Systems, or Engineering, or equivalent work experience. 5-8 years of IT experience in application support or development. Hands-on development experience writing effective and scalable Python programs. Deep understanding of OOP concepts and development models in Python. Knowledge of popular Python libraries/ORM libraries and frameworks. Exposure to unit testing frameworks like Pytest. Good understanding of Spark architecture, as the system involves data-intensive operations. Solid work experience in Spark performance tuning. Experience/exposure with the Kafka messaging platform. Experience with build tools like Maven and PyBuilder. Exposure to AWS offerings such as EC2, RDS, EMR, Lambda, S3, and Redis. Hands-on experience in at least one relational database (Oracle, Sybase, SQL Server, PostgreSQL). Hands-on experience in SQL queries and writing stored procedures and functions. A strong willingness to learn new technologies. Excellent communication skills, with strong verbal and writing proficiencies. Additional Preferred Qualifications: Proficiency in building data analytics solutions on AWS Cloud. Experience with microservice and serverless architecture implementation.
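For the Pytest exposure this posting asks about, here is a small, hedged example of a unit-tested Python transformation. The index-rebasing rule and function name are invented purely for illustration.

```python
# A small unit-tested transformation in the Pytest style the posting mentions.
# The rebasing rule and names are illustrative only.
import pytest

def rebase_levels(levels, base_value=1000.0):
    """Rebase a series of index levels so the first observation equals base_value."""
    if not levels:
        raise ValueError("levels must be non-empty")
    first = levels[0]
    return [round(base_value * lvl / first, 4) for lvl in levels]

def test_rebase_starts_at_base():
    assert rebase_levels([250.0, 275.0, 300.0]) == [1000.0, 1100.0, 1200.0]

def test_rebase_rejects_empty_input():
    with pytest.raises(ValueError):
        rebase_levels([])
```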
Posted 3 months ago
10 - 15 years
25 - 30 Lacs
Chennai, Hyderabad, Noida
Hybrid
Job title: AWS Solution Delivery Expert. Job Summary: We are seeking a highly skilled AWS solution delivery expert to join our team. The ideal candidate should have extensive experience in designing, implementing, and managing AWS solutions for business teams across various regions. This role requires a deep understanding of AWS services, good problem-solving skills, and the ability to communicate technical concepts effectively to the required stakeholders. Job description: Support the AWS environment. Understand the business requirements from external collaborators to create end-to-end infrastructure designs and deliver solutions on the AWS cloud platform. Assist scientists and external collaborators throughout the entire infrastructure delivery process, guiding them in provisioning necessary AWS services and utilizing the NVS infrastructure efficiently. Proficiency in AWS services like EC2, VPC, S3, RDS, Lambda, CloudFront, EBS, EFS, ASG, IAM, ELB, DataSync, Route53, EKS, ECS, etc. Experience on Linux/Windows OS and CI/CD implementation using Jenkins, Ansible, AWS CloudFormation, Terraform, and containerization technologies such as Docker, K8s, and HPC. Proficiency with monitoring, logging, and troubleshooting tools for AWS such as CloudWatch, CloudTrail, Splunk, etc. Familiarity with data analysis tools and programming/scripting languages such as SQL, Python, and Bash. Technical expertise in virtualization, on-premises storage, network connectivity, data protection (backup and recovery), and DR & HA. Expertise in design using Draw.io for AWS infrastructure architecture. Hands-on experience in implementing data transfer solutions from external vendor accounts to internal accounts and vice versa. Collaborate with cross-functional teams, including developers, architects, and project managers, to ensure successful solution delivery. Qualifications: Candidate with a bachelor's degree and a minimum of 5+ years' experience with a strong technical background. Proven experience as an AWS Solution Architect and DevOps engineer, preferably within the life sciences industry. Excellent communication and presentation skills to effectively collaborate with business and internal teams. Familiarity with regulatory frameworks and standards applicable to the life sciences industry, such as GxP, HIPAA, and FDA regulations. Stay up to date with the latest AWS services, features, and best practices. AWS Certified Solutions Architect certification is an added advantage. Ability to work independently and collaboratively in a team environment. The candidate should be flexible to work during the US time zone.
Posted 3 months ago
5 - 10 years
10 - 20 Lacs
Hyderabad
Work from Office
Responsibilities: Understand business requirements to create end-to-end infrastructure designs and deliver solutions on the AWS cloud platform. Assist scientists and external collaborators throughout the infrastructure delivery process, guiding them in provisioning necessary AWS services and utilizing the AWS infrastructure efficiently. Collaborate with cross-functional teams, including developers, architects, and project managers, to ensure successful solution delivery. Stay up to date with the latest AWS services, features, and best practices. Requirements: Bachelor's degree and a minimum of 5+ years' experience with a strong technical background. Proven experience as an AWS Solution Architect and DevOps engineer, preferably within the life sciences industry. Familiarity with regulatory frameworks and standards applicable to the life sciences industry, such as GxP, HIPAA, and FDA regulations. Proficiency in AWS services like EC2, VPC, S3, RDS, Lambda, CloudFront, EBS, EFS, ASG, IAM, ELB, DataSync, Route53, EKS, ECS, etc. Experience on Linux/Windows OS and CI/CD implementation using Jenkins, Ansible, AWS CloudFormation, Terraform, and containerization technologies such as Docker, K8s, and HPC. Proficiency with monitoring, logging, and troubleshooting tools for AWS such as CloudWatch, CloudTrail, Splunk, etc. Familiarity with data analysis tools and programming/scripting languages such as SQL, Python, and Bash. Technical expertise in virtualization, on-premises storage, network connectivity, data protection (backup and recovery), and DR & HA. Expertise in design using Draw.io for AWS infrastructure architecture. Hands-on experience in implementing data transfer solutions from external vendor accounts to internal accounts and vice versa. Flexible to work during the US time zone. Ability to work independently and collaboratively in a team environment. Excellent communication and presentation skills to effectively collaborate with business and internal teams. B2+ English level proficiency. Nice to have: AWS Certified Solutions Architect certification is an added advantage.
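For the vendor-account-to-internal-account data transfers mentioned above, here is a hedged boto3 sketch of a server-side S3 copy, assuming the external bucket policy already grants this account read access. The bucket names and prefix are placeholders.

```python
# Sketch of a cross-account S3 transfer: copy vendor objects into an internal bucket.
# Assumes the vendor bucket policy grants s3:ListBucket/s3:GetObject to this account.
import boto3

s3 = boto3.client("s3")
SOURCE_BUCKET = "vendor-dropzone-bucket"    # external account's bucket (placeholder)
DEST_BUCKET = "internal-research-landing"   # bucket in our own account (placeholder)
PREFIX = "incoming/2024-06/"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SOURCE_BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        # Server-side copy: data moves within S3, not through this machine.
        # (Objects larger than 5 GB would need a multipart/managed copy instead.)
        s3.copy_object(
            Bucket=DEST_BUCKET,
            Key=key,
            CopySource={"Bucket": SOURCE_BUCKET, "Key": key},
        )
        print("copied", key)
```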
Posted 3 months ago
2 - 7 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Summary: AWS Engineer with approximately 2 years of experience to join our growing team. The ideal candidate will have hands-on experience in designing, deploying, and managing AWS cloud infrastructure. You will work closely with development and operations teams to ensure the reliability, scalability, and security of our cloud-based applications. Responsibilities: Design, implement, and maintain AWS cloud infrastructure using best practices. Deploy and manage applications on AWS services such as EC2, S3, RDS, VPC, and Lambda. Implement and maintain CI/CD pipelines for automated deployments. Monitor and troubleshoot AWS infrastructure and applications to ensure high availability and performance. Implement security best practices and ensure compliance with security policies. Automate infrastructure tasks using infrastructure-as-code tools (e.g., CloudFormation, Terraform). Collaborate with development and operations teams to resolve technical issues. Document infrastructure configurations and operational procedures. Participate in on-call rotations as needed. Optimize AWS costs and resource utilization. Required Skills and Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 2+ years of hands-on experience with AWS cloud services. Proficiency in AWS services such as EC2, S3, RDS, VPC, IAM, and Lambda. Experience with infrastructure-as-code tools (e.g., CloudFormation, Terraform). Experience with CI/CD pipelines and tools (e.g., Jenkins, AWS CodePipeline). Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills. Desire to learn new technologies. Perks & Benefits: Health and Wellness: healthcare policy covering your family and parents. Food: enjoy a scrumptious buffet lunch at the office every day (for Bangalore). Professional Development: learn and propel your career; we provide workshops, funded online courses, and other learning opportunities based on individual needs. Rewards and Recognitions: recognition and rewards programs in place to celebrate your achievements and contributions. Why join Relanto? Health & Family: comprehensive benefits for you and your loved ones, ensuring well-being. Growth Mindset: continuous learning opportunities to stay ahead in your field. Dynamic & Inclusive: vibrant culture fostering collaboration, creativity, and belonging. Career Ladder: internal promotions and a clear path for advancement. Recognition & Rewards: celebrate your achievements and contributions. Work-Life Harmony: flexible arrangements to balance your commitments.
Posted 3 months ago
6 - 11 years
8 - 14 Lacs
Bengaluru
Work from Office
Role: MSSQL Server & MongoDB Database Administrator (AWS Cloud). We are seeking a highly skilled SQL Server Database Administrator with expertise in AWS cloud environments. The ideal candidate will have a deep understanding of SQL Server database administration and cloud-native technologies, and strong hands-on experience managing databases hosted on AWS. This role involves ensuring the performance, availability, and security of SQL Server databases in a cloud-first environment. Key Responsibilities: Hands-on expertise in data migration between on-prem databases and MongoDB Atlas. Experience in creating clusters and databases and creating users. SQL Server Administration: install, configure, upgrade, and manage SQL Server databases hosted on AWS EC2 and RDS. AWS Cloud Integration: design, deploy, and manage SQL Server instances using AWS services like RDS, EC2, S3, CloudFormation, and IAM. Performance Tuning: optimize database performance through query tuning, indexing strategies, and resource allocation within AWS environments. High Availability and Disaster Recovery: implement and manage HA/DR solutions such as Always On Availability Groups, Multi-AZ deployments, or read replicas on AWS. Backup and Restore: configure and automate backup strategies using AWS services like S3 and Lifecycle Policies while ensuring database integrity and recovery objectives. Security and Compliance: manage database security, encryption, and compliance standards (e.g., GDPR, HIPAA) using AWS services like KMS and GuardDuty. Monitoring and Automation: monitor database performance using AWS CloudWatch, SQL Profiler, and third-party tools; automate routine tasks using PowerShell, AWS Lambda, or AWS Systems Manager. Collaboration: work closely with development, DevOps, and architecture teams to integrate SQL Server solutions into cloud-based applications. Documentation: maintain thorough documentation of database configurations, operational processes, and security procedures. Required Skills and Experience: 6+ years of experience in SQL Server database administration and 3+ years of experience in MongoDB administration. Extensive hands-on experience with AWS cloud services (e.g., RDS, EC2, S3, VPC, IAM). Proficiency in T-SQL programming and query optimization. Strong understanding of SQL Server HA/DR configurations in AWS (Multi-AZ, read replicas). Experience with monitoring and logging tools such as AWS CloudWatch, CloudTrail, or third-party solutions. Knowledge of cloud cost management and database scaling strategies. Familiarity with infrastructure-as-code tools (e.g., CloudFormation, Terraform). Strong scripting skills with PowerShell, Python, or similar languages. Preferred Skills and Certifications: Knowledge of database migration tools like AWS DMS or native backup/restore processes for cloud migrations. Understanding of AWS security best practices and tools such as KMS, GuardDuty, and AWS Config. Certifications such as AWS Certified Solutions Architect, AWS Certified Database Specialty, or Microsoft Certified: Azure Database Administrator Associate. Educational Qualification: Bachelor's degree in Computer Science, Information Technology, or a related field. Skills: PRIMARY COMPETENCY: Data Engineering. PRIMARY SKILL: Microsoft SQL Server APPS DBA. PRIMARY SKILL PERCENTAGE: 70. SECONDARY COMPETENCY: Data Engineering. SECONDARY SKILL: MongoDB APPS DBA. SECONDARY SKILL PERCENTAGE: 30.
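As a hedged example of the routine-task automation this posting mentions (PowerShell, AWS Lambda, or Systems Manager), here is a minimal Lambda-style boto3 sketch that takes a manual RDS snapshot. The instance identifier and tag values are placeholders.

```python
# Illustrative Lambda-style automation: take a manual RDS snapshot of a SQL Server instance.
# The DB instance identifier and tags are placeholders, not details from the posting.
import datetime
import boto3

rds = boto3.client("rds")

def lambda_handler(event, context):
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M")
    response = rds.create_db_snapshot(
        DBInstanceIdentifier="prod-mssql-01",                  # assumed instance name
        DBSnapshotIdentifier=f"prod-mssql-01-manual-{stamp}",  # unique snapshot id
        Tags=[{"Key": "retention", "Value": "30d"}],
    )
    return response["DBSnapshot"]["DBSnapshotIdentifier"]
```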
Posted 3 months ago
4 - 7 years
12 - 17 Lacs
Hyderabad
Work from Office
What you'll be doing... Verizon is looking for a dynamic and talented individual for the PQC Team. The Post-Quantum Cryptography (PQC) project focuses on securing organizational data against emerging quantum threats by identifying security vulnerabilities and enhancing data protection at the IP level. The team will assess and address potential weaknesses, ensuring resilience against quantum computing-based attacks. By enriching data from every IP within the organization, the project strengthens threat intelligence and fortifies security protocols, enabling a future-proof defense against evolving cyber risks. Design, develop, and maintain front-end applications using Angular. Build and optimize back-end services with Node.js. Work with SQL databases to design schemas, optimize queries, and ensure data integrity. Collaborate with cross-functional teams to define, design, and deliver new features. Implement cloud solutions, preferably using AWS, to enhance scalability and reliability. Ensure best practices in coding, security, and performance optimization. Debug and resolve technical issues while ensuring a seamless user experience. Stay up to date with industry trends and emerging technologies. What we are looking for. You'll need to have: Bachelor's degree or four or more years of work experience. Four or more years of experience in full-stack development. Strong proficiency in Angular (latest versions) for front-end development. Hands-on experience with Node.js for back-end development. Expertise in SQL databases (MySQL, PostgreSQL, or SQL Server). Understanding of cloud technologies, preferably AWS (EC2, S3, Lambda, RDS, etc.). Experience with RESTful APIs and microservices architecture. Strong problem-solving skills and ability to work independently or in a team. Good understanding of DevOps, CI/CD pipelines, and containerization (Docker/Kubernetes) is a plus. Even better if you have one or more of the following: AWS Certification.
Posted 3 months ago
4 - 8 years
6 - 10 Lacs
Hyderabad
Work from Office
What you'll be doing... Verizon is looking for a dynamic and talented individual for the PQC Team. The Post-Quantum Cryptography (PQC) project focuses on securing organizational data against emerging quantum threats by identifying security vulnerabilities and enhancing data protection at the IP level. The team will assess and address potential weaknesses, ensuring resilience against quantum computing-based attacks. By enriching data from every IP within the organization, the project strengthens threat intelligence and fortifies security protocols, enabling a future-proof defense against evolving cyber risks. Design, develop, and maintain front-end applications using Angular. Build and optimize back-end services with Node.js. Work with SQL databases to design schemas, optimize queries, and ensure data integrity. Collaborate with cross-functional teams to define, design, and deliver new features. Implement cloud solutions, preferably using AWS, to enhance scalability and reliability. Ensure best practices in coding, security, and performance optimization. Debug and resolve technical issues while ensuring a seamless user experience. Stay up to date with industry trends and emerging technologies. What we're looking for. You'll need to have: Bachelor's degree or four or more years of work experience. Four or more years of experience in full-stack development. Strong proficiency in Angular (latest versions) for front-end development. Hands-on experience with Node.js for back-end development. Expertise in SQL databases (MySQL, PostgreSQL, or SQL Server). Understanding of cloud technologies, preferably AWS (EC2, S3, Lambda, RDS, etc.). Experience with RESTful APIs and microservices architecture. Strong problem-solving skills and ability to work independently or in a team. Good understanding of DevOps, CI/CD pipelines, and containerization (Docker/Kubernetes) is a plus. Even better if you have one or more of the following: AWS Certification.
Posted 3 months ago
5 - 8 years
12 - 15 Lacs
Delhi, Mumbai, Kolkata
Work from Office
We are seeking an experienced AWS Cloud and DevOps Engineer to join our dynamic team. The ideal candidate should have a strong background in cloud technologies, particularly AWS, and possess a deep understanding of DevOps practices. As a Cloud and DevOps Engineer, you will be responsible for designing, implementing, and maintaining our cloud infrastructure and ensuring smooth deployment and operation of our applications. If you are passionate about cutting-edge cloud technologies and automation and have a proven track record of driving DevOps practices, we would love to hear from you. Job Responsibilities: Design, deploy, and manage scalable and highly available AWS cloud infrastructure to support our applications and services. Collaborate with development teams to define and implement CI/CD pipelines to enable automated application deployment and release management. Develop and maintain automated monitoring, alerting, and logging systems to ensure the health and performance of our cloud environment. Implement security best practices and ensure the security and compliance of our cloud infrastructure and applications. Troubleshoot and resolve infrastructure issues, performance bottlenecks, and application-related incidents. Collaborate with cross-functional teams to optimize the performance and cost-efficiency of our cloud services. Stay up to date with the latest AWS services [EC2, ELB, Autoscaling Groups, CloudFront, S3, AWS Lambda, Jenkins, Github, RDS, Migration], features, and best practices and evaluate their potential impact on our infrastructure and processes. Participate in on-call rotation to provide support for critical infrastructure issues. Document and maintain comprehensive documentation for system configurations, procedures, and troubleshooting guides. Provide 24x7 on-call support for critical infrastructure issues and participate in incident response and resolution efforts. Requirements: Bachelors degree in computer science, Information Technology, or related field, or equivalent experience. 5-8 years of hands-on experience working as a Cloud Engineer, DevOps Engineer, or similar role. Extensive experience with AWS services such as EC2, S3, Lambda, RDS, VPC, and IAM. Proficiency in scripting languages such as Python, Bash, or PowerShell for automation and infrastructure as code. Strong experience in building and managing CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or AWS Code Pipeline. In-depth knowledge of containerization technologies like Docker and container orchestration platforms like Kubernetes. Solid understanding of infrastructure-as-code tools like Terraform or AWS CloudFormation. Familiarity with configuration management tools like Ansible, Puppet, or Chef. Experience with logging and monitoring tools such as CloudWatch, ELK Stack, or Prometheus/Grafana. Strong understanding of networking concepts, including TCP/IP, DNS, load balancing, and firewalls. Knowledge of security best practices and experience implementing security controls in AWS environments. Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment. Preferred Qualifications: AWS certifications such as AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or AWS Certified SysOps Administrator. Experience with other cloud platforms like Microsoft Azure or Oracle Cloud Platform. Familiarity with serverless computing and event-driven architectures. 
Previous experience in a software development role or exposure to software development practices. Understanding of Agile and DevOps methodologies. Location: Remote - Delhi/NCR, Bangalore/Bengaluru, Hyderabad/Secunderabad, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
Posted 3 months ago
2 - 7 years
5 - 15 Lacs
Bengaluru
Hybrid
Skill Set - On-prem MySQL DB Administration and AWS Cloud service - RDS. Key Responsibilities: Provide L2 support for MySQL database management, ensuring high availability and performance. Perform database performance optimizations to enhance application responsiveness and efficiency. Manage and troubleshoot table joins and triggers to ensure data integrity and optimized query performance. Implement and maintain recommended backup policies to safeguard data, including regular snapshots and restoration processes. Monitor and address query execution timeout issues and optimize queries for improved performance. Analyze and mitigate asynchronous replication lag and configure both asynchronous and synchronous replication setups. Conduct kernel and database patching to ensure the system is secure and up to date. Optimize server configurations, including CPU, memory, file system, and NAS settings, to improve database performance. Assist in migrations and upgrades of databases with minimal downtime and data loss. Create and manage snapshots on Amazon RDS and restore instances/databases using snapshot restore techniques. Utilize S3 buckets for efficient storage and backup solutions, ensuring proper data access and retrieval. Implement tag creation for better resource management and cost tracking in cloud environments. Manage SSL certificate renewals to maintain secure database connections. Qualifications: Proven experience as a MySQL DBA or in a similar support role. Experience with cloud services, particularly AWS RDS. Strong understanding of database concepts, including performance tuning, replication, and backup strategies. Proficiency in MySQL server configurations and optimizations. Familiarity with scripting languages for automation and system management. Excellent problem-solving skills and ability to work under pressure.
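For the replication-lag monitoring described above, here is a hedged Python sketch using PyMySQL against a replica. The host and credentials are placeholders; note that MySQL 8.0.22+ renames the statement to SHOW REPLICA STATUS and the column to Seconds_Behind_Source.

```python
# Minimal replication-lag check against a MySQL replica; host/credentials are placeholders.
import pymysql

conn = pymysql.connect(
    host="replica.example.internal",   # assumed replica endpoint
    user="monitor",
    password="secret",
    cursorclass=pymysql.cursors.DictCursor,
)

LAG_THRESHOLD_SECONDS = 60

with conn.cursor() as cur:
    cur.execute("SHOW SLAVE STATUS")   # SHOW REPLICA STATUS on MySQL 8.0.22+
    status = cur.fetchone() or {}
    lag = status.get("Seconds_Behind_Master")
    if lag is None:
        print("replication is not running or this host is not a replica")
    elif lag > LAG_THRESHOLD_SECONDS:
        print(f"WARNING: replica is {lag}s behind the source")
    else:
        print(f"replication healthy, lag {lag}s")
conn.close()
```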
Posted 3 months ago
6 - 11 years
10 - 12 Lacs
Pune
Work from Office
We are looking for highly skilled Data Engineers to join our team for a long-term offshore position. The ideal candidates will have 5+ years of experience in data engineering, with a strong focus on Python and programming. The role requires proficiency in leveraging AWS services to build efficient, cost-effective datasets that support business reporting and AI/ML exploration. Candidates must demonstrate the ability to functionally understand the client requirements and deliver optimized datasets for multiple downstream applications. The selected individuals will work under the guidance of an onsite Lead and closely with client stakeholders to meet business objectives. Key Responsibilities: Cloud Infrastructure: Design and implement scalable, cost-effective data pipelines on the AWS platform using services like S3, Athena, Glue, RDS, etc. Manage and optimize data storage strategies for efficient retrieval and integration with other applications. Support the ingestion and transformation of large datasets for reporting and analytics. Tooling and Automation: Develop and maintain automation scripts using Python to streamline data processing workflows. Integrate tools and frameworks like PySpark to optimize performance and resource utilization. Implement monitoring and error-handling mechanisms to ensure reliability and scalability. Collaboration and Communication: Work closely with the onsite lead and client teams to gather and understand functional requirements. Collaborate with business stakeholders and the Data Science team to provide datasets suitable for reporting and AI/ML exploration. Document processes, provide regular updates, and ensure transparency in deliverables. Data Analysis and Reporting: Optimize AWS service utilization to maintain cost-efficiency while meeting performance requirements. Provide insights on data usage trends and support the development of reporting dashboards for cloud costs. Security and Compliance: Ensure secure handling of sensitive data with encryption (e.g., AES-256, TLS) and role-based access control using AWS IAM. Maintain compliance with organizational and industry regulations. Required Skills: 5+ years of experience in data engineering with a strong emphasis on AWS platforms. Hands-on expertise with AWS services such as S3, Glue, Athena, RDS, etc. Proficiency in Python for building data pipelines that ingest data and integrate it across applications. Demonstrated ability to design and develop scalable data pipelines and workflows. Strong problem-solving skills and the ability to troubleshoot complex data issues. Preferred Skills: Experience with big data technologies, including Spark, Kafka, and Scala, for distributed data processing. Hands-on expertise in working with AWS big data services such as EMR, DynamoDB, Athena, Glue, and MSK (Managed Streaming for Kafka). Familiarity with on-premises big data platforms and tools for data processing and streaming. Proficiency in scheduling data workflows using Apache Airflow or similar orchestration tools like One Automation, Control-M, etc. Strong understanding of DevOps practices, including CI/CD pipelines and automation tools. Prior experience in the telecommunications domain, with a focus on large-scale data systems and workflows. AWS certifications (e.g., Solutions Architect, Data Analytics Specialty) are a plus.
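For the S3 + Athena pattern this role relies on, here is a hedged boto3 sketch that runs a query and waits for the result. The database, table, region, and results bucket are placeholders; production code would add a timeout and error handling.

```python
# Sketch of running an Athena query over a curated dataset and reading the result.
# Database, table, region, and the results bucket are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

execution = athena.start_query_execution(
    QueryString="SELECT region, count(*) AS orders FROM curated.orders GROUP BY region",
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until Athena finishes.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```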
Posted 3 months ago
7 - 9 years
0 Lacs
Pune
Hybrid
In-depth knowledge of AWS services including EC2, S3, RDS, Lambda, ACM, SSM, and IAM. Experience with Kubernetes (EKS) and Elastic Container Service (ECS) for orchestration and deployment of microservices. Engineers are expected to be able to execute upgrades independently. Cloud Architecture: Proficient knowledge of AWS advanced networking services, including CloudFront and Transit Gateway. Monitoring & Logging: Knowledge of AWS CloudWatch, CloudTrail, OpenSearch, and Grafana monitoring tools. Security Best Practices: Understanding of AWS security features and compliance standards. API: REST API/OneAPI relevant experience mandatory. Infrastructure as Code (IaC): Proficient in AWS CloudFormation and Terraform for automated provisioning. Scripting Languages: Proficient in common languages (PowerShell, Python, and Bash) for automation tasks. CI/CD Pipelines: Familiar with tools like Azure DevOps Pipelines for automated testing and deployment. Relevant Experience: A minimum of 4-5 years' experience in a comparable Cloud Engineer role. Nice to Have: Knowledge of / hands-on experience with Azure services. Agile Frameworks: Proficient knowledge of Agile ways of working (Scrum, SAFe). Certification: for AWS, at least Certified Cloud Practitioner + Certified Solutions Architect Associate + Certified Solutions Architect Professional; for Azure, at least Microsoft Certified: Azure Solutions Architect Expert. Mindset: Platform engineers must focus on automating activities where possible, to ensure stability, reliability, and predictability.
Posted 3 months ago
10 - 15 years
50 - 60 Lacs
Hyderabad
Remote
Principal Engineer (WFH) Experience: 8 - 15 Years Salary: INR 50,00,000-60,00,000 / year Preferred Notice Period : within 15 days Shift : 10:00 AM to 7:00 PM IST Opportunity Type: Remote Placement Type: Permanent (*Note: This is a requirement for one of Uplers' Partners) What do you need for this opportunity? Must have skills required : Data structure and algorithm, Go, Microservices Architecture, AWS, JavaScript, Python, System Design, TypeScript Good to have skills : B2B, HRTECH, SaaS Our Hiring Partner is Looking for: Principal Engineer who is passionate about their work, eager to learn and grow, and who is committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you. Role Overview Description What You'll Be Doing The company is looking to hire a Principal Engineer to provide hands-on technical leadership for the entire product engineering team operating in India. In this position, you'll work on leading the analytics, integrations, application platform, and product line of a complex enterprise SaaS product with your U.S. counterparts. Develop and enhance a complex enterprise performance management SAAS platform to drive critical decision-making in large enterprises. Architect, test, and implement solutions that elevate the company product to an enterprise level. Identify technology, process, and skill gaps and work with the U.S. and India heads of engineering to address them. Mentor a team of senior and staff engineers. Collaborate with a cross-functional team including engineering managers, product managers, designers, QA and other stakeholders to convert business requirements into product and technology outcomes. Introduce the team to new technologies and represent The company's technology via external events, open-source contributions, and blog posts. Participate in discussions with fully remote colleagues across multiple time zones (inclusive of Europe & US). What'll Help You Be Successful Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 8 to 15 years of experience in large-scale enterprise software development and architecture. A burning passion for technology, specifically technology in the service of business. A strong understanding of and respect for disciplines outside engineering - product, UX/UI design, sales, and marketing. Hands-on full-stack development experience with any high-level programming language. Python, Typescript, or Go are preferred. Expertise in AWS cloud, specifically EKS, RDS, MSK, and/or MWAA. Experience working with mid-sized teams of 50-100 engineers. Experience working with distributed computing and teams. Ability to define problems and resolve unknowns independently. Highly self-directed. Ability to communicate design and communicate large scale software architectures and guide teams into implementing them incrementally. Highly disciplined and self-motivated. What We All Do All employees share the responsibility of being aware of information security risks and adhering to information security policies and procedures. All employees are required to participate in information security awareness and training programs. All employees have a responsibility to handle data in accordance with data classification and handling guidelines. Employees should be aware of the sensitivity of the data they interact with and follow appropriate security measures. 
All employees have a responsibility of reporting information security incidents in accordance with information security policies and procedures. Life at The company At the company, we prioritize our people. In that spirit, we've put together a great benefits program to support our employees' health and wellness that includes the following: Work closely with a cross functional team of highly motivated and intelligent folks with a unique range of startup and enterprise experience. Balanced Work / Life with unlimited vacation. A vibrant company culture with frequent team building events. Competitive salary with stock options. Company sponsored health and personal accident insurance benefits. A remote first work culture that allows you to work from anywhere in India and travel to meet as a team when possible. A one-time reimbursement for work from the home office set up. A monthly stipend for the internet. Engagement Type Direct-Hire on Clients payroll Job Type - Permanent Location- Remote Work Timings- 10 AM-7 PM IST (Flexible shift timings) How to apply for this opportunity Register or log in on our portal Click 'Apply,' upload your resume, and fill in the required details. Post this, click Apply Now' to submit your application. Get matched and crack a quick interview with our hiring partner. Land your global dream job and get your exciting career started! About Our Hiring Partner: The company provides enterprise software to easily manage strategic plans, collaborative goals (OKRs), and ongoing performance conversations. Company mission is to build solutions that help companies execute their strategic objectives through people engagement, performance enablement and decision analytics. We are working with some of the worlds leading brands like Walmart and Intuit to disrupt the business and talent management spaces with next generation Strategic Execution and Performance Management solutions. About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. You will also be assigned to a dedicated Talent Success Coach during the engagement. ( Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 3 months ago
2 - 7 years
4 - 9 Lacs
Mumbai
Work from Office
- Project hands-on experience in AWS cloud services - Good knowledge of SQL and experience in working with databases like Oracle, MS SQL, etc. - Experience with AWS services such as S3, RDS, EMR, Redshift, Glue, SageMaker, DynamoDB, Lambda
Posted 3 months ago
5 - 8 years
30 - 35 Lacs
Gurgaon, Jaipur
Work from Office
An "Engineering Manager - .NET and AWS" is a senior lead role in software development, focusing on managing a team of engineers who specialize in building microservices and applications using the .NET technology stack and Amazon Web Services (AWS) cloud infrastructure. This role combines technical expertise with leadership and management responsibilities. Here are key responsibilities and skills associated with this role: Responsibilities: 1. Team Leadership: Lead and manage a team of engineers, providing guidance, coaching, and mentorship to help them meet their professional goals. 2. Project Management: Oversee project planning, execution, and delivery, ensuring that projects are completed on time and within budget. 3. Architecture and Design: Collaborate with the Solution Architect to define architecture, design patterns, and best practices for developing .NET-based microservices on AWS. 4. Microservices Development: Provide technical direction and expertise for developing microservices and APIs using .NET technologies, such as ASP.NET Core. 5. AWS Integration: Oversee the integration of AWS cloud services into the architecture, such as and Redis, Opensearch, AWS Event Hub, Amazon ECS, AWS Lambda, and other relevant AWS offerings. 6. Scalability and Performance: Ensure that applications and microservices are designed for scalability and optimized for performance by utilizing AWS auto- scaling and load balancing. 7. Security and Compliance: Implement security best practices and compliance standards within the microservices and AWS infrastructure. 8. Resource Management: Manage allocation of resources effectively, and make strategic decisions to optimize resource usage. 9. Stakeholder Communication: Communicate with business stakeholders, product managers, and cross-functional teams to align engineering efforts with business objectives. 10. Mentoring and Training: Foster a culture of continuous learning by providing training and development opportunities for team members. Skills and Qualifications: 1. .NET Stack: Proficiency in .NET technologies, particularly C#, ASP.NET Core, and ASP.NET REST API. 2. AWS Services: In-depth knowledge of AWS services and their use cases, including EC2, Lambda, API Gateway, RDS, DynamoDB, S3, Opensearch, Redis and more. 3. Microservices Architecture: Strong understanding of microservices (serverless) architecture, patterns, and best practices. 4. API Design: Expertise in designing RESTful APIs and maintaining API documentation. 5. Cloud Computing: A comprehensive understanding of cloud computing concepts and experience in AWS infrastructure management. 6. Security and Compliance: Knowledge of security best practices and compliance standards relevant to AWS environments. 7. Containerization: Familiarity with containerization technologies, Docker, and container orchestration using AWS ECS or EKS. 8. Project Management: Proficiency in project management methodologies and tools for effective project planning and execution. 9. Leadership and Communication: Strong leadership skills, excellent communication, and the ability to collaborate with cross-functional teams. 10. Agile Methodology: Ability to work in Agile development environments, leading Agile teams and adapting to changing requirements. Hands on exp .Net & AWS , Work from Office Only, Immediate Joiner or Early Joiner Only
Posted 3 months ago
4 - 8 years
6 - 10 Lacs
Hyderabad
Work from Office
SRE Engineer The SRE Engineer works with various areas of the business to collaborate on an infrastructure strategy that is secure, scalable, high performance and aligned with the goal of continuous integration and continuous deployment. This team member is dedicated to Security projects and is responsible for helping build the standards for infrastructure, deployment, and security implementations with a keen eye toward the future state of technology and the industry. He/She reports directly to the CISO, and works closely with Senior DevOps engineers and other Cloud Operation teams to build the frameworks that are adopted for future projects and processes, specifically as it relates to security. This team member is future-focused, capable of moving quickly and taking risks, and challenging the status quo. Responsibilities Analyze current technology utilized within the company and develop steps and processes to improve and expand upon them Provide clear goals for all areas of a project and develop steps to oversee their timely execution Work closely with development teams within the company to maintain hardware and software needed for projects to be completed efficiently Participate in a constant feedback loop among the community of Cloud Operation teams and enterprise architecture teams Work with software development teams to engineer and implement infrastructure solutions, including infrastructure automation and CI/CD pipelines Provide evangelism for cutting-edge, sustainable automation in continuously integrating and deploying to multiple environments Requirements Ability to build integrations between applications using an Application Programming Interface (API) 4 years of recent experience with cloud platforms such as AWS or Microsoft Azure (AWS preferred) Some recent experience with infrastructure as code. (Terraform, CloudFormation, or AWS CDK preferred) Demonstrate ability to leverage scripting languages such as PowerShell and Bash to automate processes. Other coding languages a plus Some software development experience preferred, including UI, database, and backend systems. General understanding of tools, applications and architectural patterns associated with CI/CD and cloud development Strong understanding of security tenets Ability to think analytically and advocate for creative solutions Ability to work collaboratively with members of other teams Excellent written and verbal communication skills Delivery experience as a software engineer for on-premises or cloud applications. Strong knowledge of AWS services including EC2, ECS, VPC, IAM, Control Tower, CloudFormation, Organizations, Systems Manager, AWS Backup, AWS Instance Scheduler, ELB, and RDS. Hands-on experience with AWS provisioning and infrastructure automation using Terraform and CloudFormation templates/stacks. Experience configuring and managing SSO roles in AWS. Proficient in Python, Shell, YAML, and PowerShell scripting. Extensive hands-on experience in automation and streamlining processes. Skilled in vulnerability remediation and security best practices. Experience with ECS repositories and GitHub for version control and CI/CD pipelines. Proficient in Windows and Linux server administration. Experience with Active Directory (AD) and RBAC (Role-Based Access Control) roles. Hands-on experience with monitoring tools like CloudWatch, Nagios, Observium, and Kibana Logs. Good knowledge of VMware and virtual machines. Experience in OS patching, upgrades, and configuring Stunnel or site-to-site VPN tunnels. 
Strong understanding of change management processes, SLAs, and tools like Jira (Kanban Boards and Sprints). Knowledge of implementing High Availability (HA), Fault Tolerance (FT), and Disaster Recovery (DR) strategies in the cloud.
Posted 3 months ago
5 - 7 years
20 - 25 Lacs
Delhi NCR, Mumbai, Bengaluru
Work from Office
Responsibilities: Design and Development: Develop robust, scalable, and maintainable backend services using Python frameworks like Django, Flask, and FastAPI. Cloud Infrastructure: Work with AWS services (e.g., CloudWatch, S3, RDS, Neptune, Lambda, ECS) to deploy, manage, and optimize our cloud infrastructure. Software Architecture: Participate in defining and implementing software architecture best practices, including design patterns, coding standards, and testing methodologies. Database Management: Proficiently work with relational databases (e.g., PostgreSQL) and NoSQL databases (e.g., DynamoDB, Neptune) to design and optimize data models and queries. Experience with ORM tools. Automation: Design, develop, and maintain automation scripts (primarily in Python) for various tasks, including: data updates and processing; scheduling cron jobs; integrating with communication platforms like Slack and Microsoft Teams for notifications and updates; implementing business logic through automated scripts. Monitoring and Logging: Implement and manage monitoring and logging solutions using tools like the ELK stack (Elasticsearch, Logstash, Kibana) and AWS CloudWatch. Production Support: Participate in on-call rotations and provide support for production systems, troubleshooting issues and implementing fixes. Proactively identify and address potential production issues. Team Leadership and Mentorship: Lead and mentor junior backend developers, providing technical guidance, code reviews, and support for their professional growth. Required Skills and Experience: 5+ years of experience in backend software development. Strong proficiency in Python and at least two of the following frameworks: Django, Flask, FastAPI. Hands-on experience with AWS cloud services, including ECS. Experience with relational databases (e.g., PostgreSQL) and NoSQL databases (e.g., DynamoDB, Neptune). Strong experience with monitoring and logging tools, specifically the ELK stack and AWS CloudWatch. Locations: Mumbai, Delhi/NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote. Work Timings: 2:30 PM-11:30 PM (Monday-Friday)
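As a hedged illustration of the Slack-notification automation described above, the sketch below posts a status message to an incoming webhook. The webhook URL (read from an environment variable) and the message text are placeholders.

```python
# Minimal sketch of a Slack notification step for an automation script.
# The webhook URL and message content are placeholders.
import os
import requests

SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]  # assumed to be configured per environment

def notify(text: str) -> None:
    """Send a plain-text message to the team channel; raise if Slack rejects it."""
    response = requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)
    response.raise_for_status()

if __name__ == "__main__":
    # A cron-style caller might invoke this after a nightly data update completes.
    notify("Nightly data refresh completed successfully")
```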
Posted 3 months ago
5 - 10 years
8 - 13 Lacs
Chennai
Work from Office
AWS Lambda: Create Lambda functions with all the security in place. Proficiency in Node.js (should have developed services and written unit and integration tests). Strong notions of security best practices (e.g., using IAM Roles, KMS, pseudonymisation, etc.). SwaggerHub: define the services on SwaggerHub. Serverless approaches using AWS Lambda, for example the Serverless Application Model (AWS SAM). Must have hands-on experience with RDS, Kafka, ELB, Secrets Manager, S3, API Gateway, CloudWatch, and EventBridge services. Knowledge of writing unit test cases using the Mocha framework. Must have knowledge of encryption and decryption of PII data and files in transit and at rest. Should have knowledge of the CDK (Cloud Development Kit). Knowledge of creating SQS/SNS, DynamoDB, and API Gateway resources using the CDK. Serverless stack: Lambda, API Gateway, Step Functions; coding in Node.js. Must have good analytical and problem-solving skills and the ability to troubleshoot and solve problems. CDK. Advanced networking concepts: Transit Gateway, VPC endpoints, multi-account connectivity.
Posted 3 months ago
8 - 13 years
25 - 30 Lacs
Hyderabad
Work from Office
Primary Skills: Backend Tech: Java, Spring, REST services, Spring JPA. AWS Cloud knowledge: Lambda, API Gateway, Step Functions, SQS, SNS, S3, CloudWatch. Database: MongoDB, RDS. Secondary Skills: Frontend Tech: React, Node.js. Backend Tech: Python, Message Queue. Should have good experience working in an agile team. Should have decent experience in designing/architecture. Should be able to explore new services on AWS and do POCs. Should be able to contribute in standalone roles.
Posted 3 months ago