3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
You should have experience with data transfer tools and methods, including the ability to create and process change requests for new and existing clients. This includes expertise in GlobalScape, NDM (including Secure+), AWS S3/AWS CLI, GCP, Azure Blob, and WinSCP/PuTTY. Additionally, you should be proficient in certificate management and application license management. Experience with server patching, maintenance, vulnerability remediation, and server monitoring is essential. Familiarity with system diagnostic tools and maintenance reports such as Rapid7 and Brinqa is required. You should also possess expertise in file server management, Active Directory, and DNS management. Extensive knowledge of AWS services is a must, including VPC, EC2, EMR, S3, Fargate, load balancers, EFS, EBS, and AWS WorkSpaces. You should be able to install and configure software according to organizational guidelines and plans, including system configuration and default user settings. Managing server access requests, system accounts, password management, instance/EBS snapshots, server decommissions, and change control processes will be part of your responsibilities. You should also have experience in setting up and configuring user tools such as DBeaver, Excel macro functionality, and Vedit, and in troubleshooting any related issues. This position was posted by Hymavati Sarojini from Softility.
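Since the role involves managing instance and EBS snapshots, below is a minimal sketch of how that task might be automated with boto3. The region, instance ID, and tagging scheme are hypothetical placeholders, not details from the posting.

```python
import boto3

# Minimal sketch: snapshot every EBS volume attached to one instance.
# Region, instance ID, and tags are illustrative assumptions.
ec2 = boto3.client("ec2", region_name="ap-south-1")

def snapshot_instance_volumes(instance_id: str) -> list[str]:
    """Create a snapshot for each EBS volume attached to instance_id."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]
    snapshot_ids = []
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"Scheduled snapshot of {vol['VolumeId']} ({instance_id})",
            TagSpecifications=[{
                "ResourceType": "snapshot",
                "Tags": [{"Key": "CreatedBy", "Value": "ops-automation"}],
            }],
        )
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids

if __name__ == "__main__":
    print(snapshot_instance_volumes("i-0123456789abcdef0"))
```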
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You will collaborate with our SREs and Product Engineers to deliver reliability services and enhance our tooling platforms by leveraging automated code development and continuous integration/continuous deployment principles. Your role involves gaining proficiency in various technology domains such as cloud, SCM, CI/CD, network, automation, and monitoring/logging. Your primary responsibility will be to enhance the reliability, security, and automation of our tooling platforms to ensure a seamless experience for our user community. This position requires providing support and being on-call to manage incidents, changes, and problem resolution effectively. The ideal candidate should have 5 to 9 years of experience in the field. This role is preferably based in Bangalore on-site in a hybrid mode, with on-call support required on weekends on a rotational basis.

Key Requirements:
- Extensive hands-on experience with AWS cloud platforms
- Proficiency in key AWS services such as CloudFormation, KMS, S3, EC2, CloudWatch, IAM, and CodeCommit
- Familiarity with secrets management services in AWS and an understanding of other secrets management systems
- Ability to analyze logs and troubleshoot issues
- Understanding of cost analysis and the ability to optimize architecture
- Proficiency in tools such as Splunk, Dynatrace, GitHub, GA, JFrog, and CircleCI, including administrative capabilities in these tools
- Good grasp of GitHub and GitHub Actions workflows
- Ability to work with Docker and image artifact repositories via JFrog
- Hands-on experience with SCM and CI/CD principles
- Proficiency in monitoring and logging tools such as Splunk and Dynatrace/New Relic, including building indexes and dashboards
- Coding skills in Python and Terraform, sufficient to debug code and make necessary modifications
- Strong documentation skills, including thorough review and editing of user and SRE documentation

Education:
- Bachelor's degree in Computer Science or a related field (Master's degree preferred) with 3-5 years of work experience.
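As an illustration of the secrets-management skill the posting asks for, here is a minimal boto3 sketch that reads and parses a JSON secret from AWS Secrets Manager. The secret name and region are hypothetical assumptions, not details from the posting.

```python
import json
import boto3
from botocore.exceptions import ClientError

# Minimal sketch: fetch and parse a JSON secret from AWS Secrets Manager.
# The secret name and region are illustrative assumptions.
def get_secret(secret_name: str, region: str = "ap-south-1") -> dict:
    client = boto3.client("secretsmanager", region_name=region)
    try:
        response = client.get_secret_value(SecretId=secret_name)
    except ClientError as err:
        # Surface access/rotation errors to the caller's logging pipeline.
        raise RuntimeError(f"Could not read secret {secret_name}") from err
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    creds = get_secret("tooling/splunk-api-token")
    print(sorted(creds.keys()))  # never print the secret values themselves
```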
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
You will be joining Birlasoft, a leading organization at the forefront of merging domain expertise, enterprise solutions, and digital technologies to redefine business outcomes. Emphasizing a consultative and design-thinking approach, we drive societal progress by empowering customers to run their businesses with unparalleled efficiency and innovation. As part of the multibillion-dollar CKA Birla Group, Birlasoft, with a dedicated team of over 12,500 professionals, is committed to upholding the Group's distinguished 162-year legacy. At our foundation, we prioritize Diversity, Equity, and Inclusion (DEI) practices, coupled with Corporate Social Responsibility (CSR) initiatives, demonstrating our dedication to building not only businesses but also inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose.

We are currently looking for a skilled and proactive StreamSets or Denodo Platform Administrator to manage and enhance our enterprise data engineering and analytics platforms. This position requires hands-on expertise in overseeing large-scale Snowflake data warehouses and StreamSets data pipelines, with a focus on robust troubleshooting, automation, and monitoring. You will ensure platform reliability, performance, security, and compliance while collaborating closely with data engineers, DevOps, and support teams. The role will be based in Pune, Hyderabad, Noida, or Bengaluru, and requires a minimum of 5 years of experience.

Key Requirements:
- Bachelor's or Master's degree in Computer Science, IT, or a related field (B.Tech./MCA preferred).
- Minimum of 3 years of hands-on experience in Snowflake administration.
- 5+ years of experience managing StreamSets pipelines in enterprise-grade environments.
- Strong familiarity with AWS services, particularly S3, IAM, Lambda, and EC2.
- Working knowledge of ServiceNow, Jira, Git, Grafana, and Denodo.
- Understanding of data modeling, ETL/ELT best practices, and modern data platform architectures.
- Experience with DataOps, DevSecOps, and cloud-native deployment principles is advantageous.
- Certification in Snowflake or AWS is highly desirable.

If you possess the required qualifications and are passionate about leveraging your expertise in platform administration to drive impactful business outcomes, we invite you to apply and be part of our dynamic team at Birlasoft.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
You should have a strong knowledge of AWS services including S3, AWS DMS (Database Migration Service), and AWS Redshift Serverless. Experience in setting up and managing data pipelines using AWS DMS is required, and proficiency in creating and managing data storage solutions using AWS S3 is a key aspect of this role. You should also be proficient in working with relational databases, particularly PostgreSQL, Microsoft SQL Server, and Oracle. Experience in setting up and managing data warehouses, particularly AWS Redshift Serverless, is important for this position.

Your responsibilities will include analyzing and interpreting complex data sets, identifying and resolving data integration issues such as inconsistencies or discrepancies, and troubleshooting data integration and migration problems effectively.

Soft skills are also essential for this role. You should be able to work collaboratively with database administrators and other stakeholders to ensure integration solutions meet business requirements, and you will need strong communication skills to document data integration processes, including data source definitions, data flow diagrams, and system interactions. Additionally, you should participate in design reviews, provide input on data integration plans, stay updated with the latest data integration tools and technologies, and recommend upgrades when necessary.

Knowledge of data security and privacy regulations is crucial, including experience ensuring adherence to data security and privacy standards during data integration processes. AWS certifications such as AWS Certified Solutions Architect or AWS Certified Database - Specialty are a plus for this position.
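To make the AWS DMS piece concrete, below is a minimal boto3 sketch that starts an existing DMS replication task and polls its status. The task ARN and region are hypothetical placeholders; a real pipeline would add error handling, alerting, and retries.

```python
import time
import boto3

# Minimal sketch: kick off an existing AWS DMS replication task and wait
# for it to report a steady state. The ARN and region are assumptions.
dms = boto3.client("dms", region_name="ap-south-1")
TASK_ARN = "arn:aws:dms:ap-south-1:123456789012:task:EXAMPLETASK"

def run_replication(task_arn: str) -> str:
    dms.start_replication_task(
        ReplicationTaskArn=task_arn,
        # "reload-target" performs a full reload; use "resume-processing"
        # to continue an interrupted task instead.
        StartReplicationTaskType="reload-target",
    )
    while True:
        task = dms.describe_replication_tasks(
            Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
        )["ReplicationTasks"][0]
        if task["Status"] in ("running", "stopped", "failed"):
            return task["Status"]
        time.sleep(30)

if __name__ == "__main__":
    print(run_replication(TASK_ARN))
```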
Posted 1 month ago
5.0 - 10.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
As a Senior Backend Developer with 5-10 years of experience, you will leverage your extensive knowledge of AWS services and the Serverless.com framework with Node.js. Your expertise in AWS Cloud, including API Gateway, CloudWatch, S3, and Cognito, will be crucial in developing robust backend solutions. You should have a strong understanding of REST web services and practical experience with AWS serverless development using Node.js and MySQL, and your proficiency in SQL will be essential for database operations. You should be proactive and demonstrate a willingness to exceed client expectations and achieve project objectives. Experience with SDLC processes, Agile Scrum methodology, and version control and CI tools like Git and Jenkins will be advantageous in this role. If you are ready to take on challenging projects, contribute to the development of scalable backend systems, and work collaboratively in a dynamic environment, this full-time Senior Backend Developer position in Coimbatore could be the perfect fit for you.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Maharashtra
On-site
The Debt Management Data Strategy Manager plays a crucial role in leading and executing the data strategy for Retail Debt Management. The primary goal is to leverage both traditional and alternate data sources to drive impactful business outcomes, automate data-related processes, and implement innovative, data-driven solutions to business challenges.

In this role, you will act as the business owner responsible for evolving the data strategy for Debt Management, from data ingestion to delivering insights. Collaboration with cross-functional teams such as data engineering, data solutions, data governance, data platform, and IT is essential to ensure the optimal implementation of data processes and automations for efficient debt management operations. You will conceptualize and implement data-driven solutions that enhance collection efficiency, reduce non-performing assets (NPAs), and enable targeted strategies such as an Account Aggregator strategy, non-contactable resolution, and Gen-AI solutions. Driving automation and documentation of recurring data processes, and overseeing the Debt Management AWS account for cost optimization, governance, management, and budgets, are also key responsibilities of this role.

The ideal candidate should possess a Bachelor's degree in Engineering, Statistics, Economics, Mathematics, or a related field, and a minimum of 7 years of experience in data management and data analysis, preferably in debt management, lending, or financial services. Proficiency in AWS data tools (Athena, S3, EMR, PySpark), strong coding skills in PySpark/Python/SQL, excellent analytical and problem-solving abilities, and the capacity to work both independently and collaboratively are essential, as are exceptional communication and interpersonal skills.
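Given the posting's emphasis on PySpark over S3 data, here is a minimal sketch of the kind of collections-analytics aggregation such a role might involve. The bucket, paths, and column names are hypothetical examples, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal sketch: summarize repayment outcomes by collection channel.
# Bucket, paths, and column names are illustrative assumptions.
spark = SparkSession.builder.appName("dm-collections-summary").getOrCreate()

accounts = spark.read.parquet("s3://dm-datalake/curated/accounts/")

summary = (
    accounts
    .filter(F.col("bucket_age_days") > 0)  # delinquent accounts only
    .groupBy("collection_channel")
    .agg(
        F.count("*").alias("accounts"),
        F.avg("amount_recovered").alias("avg_recovered"),
        F.sum(F.when(F.col("resolved") == 1, 1).otherwise(0)).alias("resolved"),
    )
    .withColumn("resolution_rate", F.col("resolved") / F.col("accounts"))
)

summary.write.mode("overwrite").parquet("s3://dm-datalake/reports/channel_summary/")
```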
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
Hyderabad, Telangana
On-site
About Us: Fission Labs is a leading software development company headquartered in Sunnyvale, with offices in Dallas and Hyderabad. We specialize in crafting flexible, agile, and scalable solutions that drive businesses forward. Our comprehensive services include product development, cloud engineering, big data analytics, QA, DevOps consulting, and AI/ML solutions, empowering clients to achieve sustainable digital transformation aligned with their business goals.

Fission Labs Website: https://www.fissionlabs.com/
Work Location: Hyderabad
Notice Period: Immediate to 30 Days

Role Overview: Omada is dedicated to developing next-gen intelligent systems that seamlessly integrate real-time APIs, cloud-native infrastructure, and external AI capabilities. We are seeking a talented Python Engineer with expertise in FastAPI, AWS, and practical experience in utilizing GenAI APIs and data pipelines.

Key Responsibilities (Backend & API Development):
- Design, develop, and maintain robust REST APIs using FastAPI and Python.
- Construct scalable microservices that interface with AWS services such as Lambda, EC2, EKS, API Gateway, DynamoDB, and S3.
- Implement workflow automation and event-driven pipelines using tools like Step Functions, SQS, and SNS.
- Create real-time and streaming APIs using WebSockets or Kinesis as needed.
- Integrate with external GenAI APIs, including OpenAI (ChatGPT APIs), Google Gemini APIs, and other third-party AI/ML APIs or services.
- Design and run web crawlers, or integrate with crawling frameworks/tools, to extract and process structured and unstructured data.

Required Skills:
- 7-9 years of backend development experience with strong proficiency in Python.
- Demonstrated production-level experience with FastAPI.
- Extensive expertise in AWS services, particularly Lambda, EC2, EKS, API Gateway, Step Functions, DynamoDB, S3, and SNS/SQS.
- Hands-on experience calling and handling responses from ChatGPT APIs (OpenAI) and Google Gemini APIs.
- Familiarity with writing or integrating web crawlers (e.g., BeautifulSoup, Playwright, Scrapy).
- Proficiency in Git and GitHub, including branching strategies, pull requests, and code reviews.
- Ability to work independently in a dynamic startup environment.
- Prior experience working on chat agents.

Preferred Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Familiarity with NoSQL and relational databases (DynamoDB, PostgreSQL, etc.).
- Experience with CI/CD workflows, Docker, and Kubernetes.
- Bonus: exposure to distributed data processing frameworks like Apache Beam or Spark.
- Bonus: previous experience integrating with external data and media APIs.

Why Join Omada:
- Contribute to building API-first systems integrated with cutting-edge AI and cloud technologies.
- Shape scalable, real-time backend architecture in a greenfield product.
- Collaborate with a modern Python + AWS + GenAI stack.
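To illustrate the FastAPI-plus-GenAI integration this role centers on, here is a minimal sketch of a REST endpoint that proxies a prompt to the OpenAI chat API. The endpoint path, model name, and environment-variable handling are illustrative assumptions, not part of the posting.

```python
import os
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from openai import OpenAI

# Minimal sketch: a FastAPI endpoint that forwards a prompt to OpenAI.
# Model name and env var are assumptions; add auth/rate limiting in practice.
app = FastAPI()
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

class ChatRequest(BaseModel):
    prompt: str

class ChatResponse(BaseModel):
    answer: str

@app.post("/v1/chat", response_model=ChatResponse)
def chat(req: ChatRequest) -> ChatResponse:
    try:
        completion = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": req.prompt}],
        )
    except Exception as err:  # map upstream failures to a clean 502
        raise HTTPException(status_code=502, detail=str(err))
    return ChatResponse(answer=completion.choices[0].message.content)

# Run locally with: uvicorn app:app --reload
```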
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
Job Description: We are seeking a Senior Data Engineer with over 5 years of experience in Python, SQL, PySpark, and AWS services such as Lambda, Glue jobs, and Athena. This position is located in Hyderabad and requires on-site work. As a Senior Data Engineer, you will build robust data pipelines, create scalable ETL solutions, and manage cloud-based data systems.

Key Responsibilities:
- Design, develop, and optimize large-scale ETL/ELT pipelines using PySpark and AWS Glue.
- Deploy serverless workflows using AWS Lambda, Athena, and S3 for data processing and querying.
- Write efficient, scalable SQL queries against Redshift, Athena, and other data platforms.
- Collaborate with diverse teams to understand business data requirements and design suitable solutions.
- Establish and manage infrastructure using CloudFormation or similar Infrastructure-as-Code tools.
- Integrate APIs and manage message queues using SNS and SQS for both real-time and batch processing.
- Identify, monitor, and improve the performance of data pipelines and cloud solutions.

Required Skills:
- Over 5 years of practical experience in Python, SQL, PySpark, and AWS Glue jobs, AWS Lambda, and Athena.
- Deep familiarity with AWS Redshift, S3, CloudFormation, SNS, and SQS.
- Sound understanding of data modeling, ETL frameworks, and distributed data systems.
- Proficiency in API integrations and consuming RESTful services.
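As a concrete reference for the Glue/PySpark work this role describes, below is a minimal skeleton of an AWS Glue ETL job. The catalog database, table, and output path are hypothetical placeholders.

```python
import sys
from awsglue.transforms import Filter
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Minimal Glue job skeleton: read a catalog table, filter, write Parquet.
# Database, table, and S3 path are illustrative assumptions.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Drop malformed rows before writing to the curated zone.
valid_orders = Filter.apply(frame=orders, f=lambda row: row["order_id"] is not None)

glue_context.write_dynamic_frame.from_options(
    frame=valid_orders,
    connection_type="s3",
    connection_options={"path": "s3://curated-zone/orders/"},
    format="parquet",
)

job.commit()
```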
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a skilled Full Stack Developer with 4-6 years of hands-on experience, proficient in React.js for front-end development and Node.js for back-end development. Your strong backend experience includes RESTful API development and familiarity with AWS Lambda, API Gateway, DynamoDB, and S3, among other AWS services. You have prior experience integrating and automating workflows for SDLC tools such as JIRA, Jenkins, GitLab, Bitbucket, GitHub, and SonarQube, and a solid understanding of OAuth2, SSO, and API key-based authentication. Additionally, you are familiar with CI/CD pipelines, microservices, and event-driven architectures. Your knowledge of Git and modern development practices is strong, and your problem-solving skills enable you to work independently. Experience with Infrastructure-as-Code tools like Terraform or CloudFormation is a plus. It would be beneficial if you have experience with AWS EventBridge, Step Functions, or other serverless orchestration tools, knowledge of enterprise-grade authentication methods such as LDAP, SAML, or Okta, and familiarity with monitoring/logging tools like CloudWatch, ELK, or DataDog.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
You should be a skilled .NET Developer with expertise in C#, .NET (Framework/Core), and AWS cloud services. Your primary responsibility will be to develop, deploy, and maintain scalable web applications and microservices in a cloud environment, collaborating with various teams to ensure the success of projects.

Your tasks will involve designing, developing, and maintaining backend systems using C# and .NET Core/.NET Framework. You will integrate RESTful APIs and web services, deploy and manage applications on AWS using services like EC2, Lambda, S3, RDS, and API Gateway, and write clean, maintainable code that follows coding standards and best practices. Collaborating with DevOps engineers on CI/CD pipelines and infrastructure automation will be part of your role, as will working closely with frontend developers, QA engineers, and product managers to ensure the smooth functioning of applications. Troubleshooting, debugging, and optimizing applications for performance and scalability, and ensuring application security and compliance in a cloud-native environment, are also among your responsibilities.

As for the requirements, you must be proficient in C# and the .NET ecosystem, including ASP.NET Core, Web API, and MVC, with a strong understanding of object-oriented programming (OOP) and design patterns. Hands-on experience with AWS cloud services such as EC2, Lambda, S3, RDS, and CloudWatch is necessary, as is experience with Entity Framework / EF Core and SQL Server. Familiarity with Git, CI/CD tools, and automated deployments on AWS is preferred, and an understanding of Docker, containers, or serverless architecture would be beneficial. (ref: hirist.tech)
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
You should have a minimum of 5+ years of experience in AWS services such as S3, Lambda, CloudWatch, SNS, and SQS, as well as in programming and developing cloud-native serverless applications in Python. Proficiency in SQL, CloudFormation (CFT), and CI/CD deployment is also required. The job location can be Mumbai, Pune, Hyderabad, Chennai, Bangalore, Noida, or Kolkata. The shift timings for this position are from 2 PM to 11 PM.

If you are interested in this opportunity, please send your updated resume to meeta.padaya@ltimindtree.com and include the following details:
- Total Experience
- Relevant Experience in AWS PaaS Services
- Python development skills
- Availability for a 2nd F2F round (1st round will be virtual)
- Willingness to work in the 2 PM to 11 PM shift (Yes/No)
- Current Company
- Current CTC
- Expected CTC
- Notice Period
- Current/Preferred Location

Join us if you are passionate about working with AWS services and cloud-native applications and have a strong background in Python programming.
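For flavor, here is a minimal sketch of the kind of cloud-native serverless Python this posting describes: a Lambda handler that consumes SQS messages and archives them to S3. The bucket name and message shape are illustrative assumptions.

```python
import json
from datetime import datetime, timezone

import boto3

# Minimal sketch of a Lambda handler triggered by an SQS event source.
# The bucket name and message fields are illustrative assumptions.
s3 = boto3.client("s3")
ARCHIVE_BUCKET = "event-archive-bucket"

def handler(event, context):
    """Archive each SQS record to S3, keyed by date and message ID."""
    for record in event["Records"]:
        body = json.loads(record["body"])
        day = datetime.now(timezone.utc).strftime("%Y/%m/%d")
        key = f"events/{day}/{record['messageId']}.json"
        s3.put_object(
            Bucket=ARCHIVE_BUCKET,
            Key=key,
            Body=json.dumps(body).encode("utf-8"),
            ContentType="application/json",
        )
    # Returning normally lets Lambda delete the batch from the queue.
    return {"archived": len(event["Records"])}
```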
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a DevOps Lead at our cloud-native services company, you will collaborate with technical leadership to shape the organization's cloud strategy, enhancing business agility and delivering cutting-edge solutions. Your role will involve demonstrating exceptional leadership and management skills to ensure top-notch service delivery, meeting contractual commitments, and surpassing targets. You will lead and oversee large enterprise-level projects involving intricate automation initiatives, ensuring project management aligns with timelines, costs, and quality standards.

The ideal candidate should possess a minimum of 3+ years of experience in both current and emerging DevOps technologies and their practical application across diverse business verticals. Key responsibilities include hands-on involvement in cloud assessment and migration, particularly on AWS infrastructure such as EC2, RDS, ElastiCache, S3, Kafka/Kinesis, and Lambda, among others. Proficiency in automation and configuration management tools such as Jenkins, Puppet, Chef, Ansible, or their equivalents is essential. Experience with application containerization and Docker/Kubernetes/EKS implementation on AWS, and familiarity with Application Performance Management (APM) tools like New Relic, Datadog, or similar log management tools, is highly valued.

The successful candidate should have a strong aptitude for system architecture process design and implementation, along with excellent influencing and relationship-building capabilities with stakeholders across service lines, platform managers, the business team, and external suppliers. Effective written and oral communication skills are critical for this role, ensuring seamless interaction with various teams and stakeholders.

This position offers a competitive package up to 10 LPA, in line with market standards. Join our dynamic team and contribute to driving cloud transformation, DevOps automation, managed services, and cloud-native application development for clients worldwide.
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
Join a compact and innovative AI team at Check Point, working side-by-side with AI developers and product managers. You will be part of a team focused on building sleek, intuitive interfaces and full-stack tools to power cutting-edge AI solutions used internally across the organization.

Your major responsibilities will include designing and developing web-based UIs that seamlessly connect with backend AI engines and databases. You will build and maintain RESTful APIs to integrate user interfaces with AI systems and internal data sources, and create scalable, responsive, and reusable components using React and Tailwind CSS. Additionally, you will implement backend services in Node.js or Python (Flask/FastAPI) that interface with AI APIs, RAG modules, and authentication layers. Ensuring UI/UX excellence by aligning with design standards, accessibility, and user feedback will be crucial. You will also deploy and monitor applications using Docker, Kubernetes, and CI/CD pipelines, and handle permissions, user access control, and secure data flows for sensitive use cases. Close collaboration with AI developers to visualize AI results and workflows clearly and effectively is also expected.

The desired background includes proficiency in modern frontend development; knowledge of React, TypeScript, Tailwind, Redux, and Next.js is advantageous. Strong backend experience, with at least 4 years in object-oriented languages such as Java or C# and REST API design, is required, as is experience with OAuth2, SAML, and JWT authentication flows. Familiarity with SQL and NoSQL databases like PostgreSQL, MongoDB, and Redis is important, and experience with cloud services such as AWS Lambda, S3, and GCP Firestore is a plus. Knowledge of logging and observability tools like Datadog, Grafana, and ELK is beneficial. Bonus points if you have experience with admin dashboards, internal portals, or AI UI components such as chat-like interfaces and graph viewers. Being detail-oriented with a strong understanding of usability and performance is key to success in this role.
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You should have expertise in AWS services like EC2, CloudFormation, S3, IAM, SNS, SQS, EMR, Athena, Glue, Lake Formation, etc. Additionally, you should possess proficiency in Hadoop/EMR/Databricks, with strong debugging skills to resolve Hive- and Spark-related issues. A solid understanding of database concepts and experience working with relational or non-relational database types (SQL, key-value, graph, etc.) is essential, as are experience in infrastructure provisioning using CloudFormation, Terraform, Ansible, and similar tools, proficiency in programming languages such as Python/PySpark, and effective written and verbal communication skills.

As part of your key responsibilities, you will work closely with data lake engineers to provide technical guidance, consultation, and resolution of their queries. You will assist in the development of simple and advanced analytics best practices, processes, technology and solution patterns, and automation, collaborating with various stakeholders in the US team. You will develop data pipelines in Python/PySpark to be executed in the AWS cloud, set up analytics infrastructure in AWS using CloudFormation templates, and develop mini/micro-batch and streaming ingestion patterns using Kinesis/Kafka. Additionally, you will be involved in seamlessly upgrading applications to higher versions (e.g., Spark/EMR upgrades) and participate in code reviews of developed modules and applications.

Furthermore, you will provide input for the formulation of best practices for ETL processes/jobs written in languages such as PySpark, and for BI processes. You will work with column-oriented data storage formats such as Parquet, the interactive query service Athena, and the event-driven compute service Lambda. Performing R&D on the latest big data technologies in the market, conducting comparative analyses, and providing recommendations based on current and future enterprise needs are also vital aspects of the role.

To qualify for this position, you should hold a Bachelor's or Master's degree in Computer Science or a similar field, with 2-4 years of strong experience in big data development. Cloud certification (AWS, Azure, or GCP) is preferred.

Join Ameriprise India LLP, a U.S.-based financial planning company with a global presence, providing client-based financial solutions for over 125 years. The company focuses on asset management and advice, retirement planning, and insurance protection. If you are talented, driven, and looking to work for an ethical company that values contribution and community impact, take the next step and build a rewarding career at Ameriprise India LLP. This is a full-time position with working hours from 2:00 pm to 10:30 pm. The role falls under the Technology job family group within the AWMPO AWMP&S President's Office in the India Business Unit.
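Since the role calls for building micro-batch and streaming ingestion with Kinesis, below is a minimal producer-side sketch using boto3. The stream name, region, and record shape are hypothetical assumptions.

```python
import json

import boto3

# Minimal sketch: publish records to a Kinesis data stream for
# downstream micro-batch consumers. Stream name/region are assumptions.
kinesis = boto3.client("kinesis", region_name="us-east-1")
STREAM_NAME = "txn-ingest-stream"

def publish(records: list[dict]) -> int:
    entries = [
        {
            "Data": json.dumps(rec).encode("utf-8"),
            # Partitioning by account keeps one account's events ordered
            # within a single shard.
            "PartitionKey": str(rec["account_id"]),
        }
        for rec in records
    ]
    response = kinesis.put_records(StreamName=STREAM_NAME, Records=entries)
    return response["FailedRecordCount"]  # retry these in a real pipeline

if __name__ == "__main__":
    failed = publish([{"account_id": 42, "amount": 129.50}])
    print(f"failed records: {failed}")
```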
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Senior Data Lake Developer at BCSS, you will play a crucial role in the advancement and expansion of our advanced analytics practice. Your primary responsibility will involve designing and developing data lakes, managing data flows, and integrating information from diverse sources into a unified data lake platform through an ETL tool. Furthermore, you will code and oversee delta lake implementations on S3 using technologies such as Databricks or Apache Hudi. It will be your responsibility to triage, debug, and resolve technical issues related to data lakes, as well as to design and develop data warehouses for scalability. Evaluating data models, designing data access patterns, and coordinating with business and technical teams throughout the software development life cycle are also key aspects of this role. Additionally, you will participate in significant technical and architectural decisions and maintain and manage code repositories such as Git.

To excel in this role, you must possess at least 5 years of experience operating on AWS Cloud with a focus on building data lake architectures, and a minimum of 3 years of experience with AWS data services such as S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS, and Redshift. You should also have 3+ years of experience building data warehouses on platforms like Snowflake, Redshift, HANA, Teradata, and Exasol, along with proficiency in Spark and in building delta lakes using technologies like Apache Hudi or Databricks, and experience with ETL tools and technologies. Furthermore, you should have 3+ years of experience in at least one programming language (Python, R, Scala, Java) and hold a Bachelor's degree in computer science, information technology, data science, data analytics, or a related field. Experience with Agile projects and Agile methodology is highly beneficial.
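As an illustration of the delta-lake-on-S3 work mentioned, here is a minimal PySpark sketch that upserts a batch of changes into a Delta table using the open-source delta-spark package (one of the technologies the posting names; Apache Hudi offers an analogous API). Paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# Minimal sketch: merge (upsert) a batch of updates into a Delta table
# on S3 using delta-spark. Paths and column names are assumptions.
spark = (
    SparkSession.builder.appName("delta-upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

updates = spark.read.parquet("s3://lake/staging/customers/")
target = DeltaTable.forPath(spark, "s3://lake/curated/customers/")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()    # overwrite changed rows
    .whenNotMatchedInsertAll() # insert brand-new rows
    .execute()
)
```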
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
Gandhinagar, Gujarat
On-site
As a Senior Node.js Developer with 5-7 years of experience, you will be an integral part of our development team, located on-site at Infocity, Gandhinagar. Your primary responsibility will be to design, develop, and maintain RESTful APIs and backend services using Node.js. You will collaborate closely with front-end developers and product teams to deliver high-quality solutions and ensure the scalability and speed optimization of applications.

Your expertise in backend development using Node.js will be crucial, along with a deep understanding of database systems such as MongoDB, MySQL, and PostgreSQL. You will need to write clean, modular, and reusable code following best practices, conduct code reviews, and mentor junior developers. Identifying performance bottlenecks, suggesting optimizations, managing third-party integrations, and ensuring application security and data protection practices will also be part of your responsibilities.

To excel in this role, you should be proficient in JavaScript (ES6+), Express.js, and asynchronous programming. Hands-on experience with ORMs and query builders, API security, authentication, and performance tuning is essential. Familiarity with version control systems, CI/CD pipelines, deployment and management of applications on AWS, containerization, and orchestration platforms will be beneficial. A Bachelor's degree in Computer Science, Information Technology, or a related technical field is required. Experience in Agile/Scrum development environments, strong analytical and debugging skills, and excellent communication and team collaboration abilities are also highly valued. Local candidates from Gandhinagar or Ahmedabad are preferred for this position.
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Pune, Maharashtra
On-site
Join Fortinet, a cybersecurity pioneer with over two decades of excellence, as you continue to shape the future of cybersecurity and redefine the intersection of networking and security. At Fortinet, the mission is to safeguard people, devices, and data everywhere. Fortinet is currently seeking a dynamic Staff Software Development Engineer to contribute to the success of the rapidly growing business.

As a Staff Software Development Engineer at Fortinet, you will be responsible for designing and implementing core services and defining the architecture of the system. Fortinet is looking for a highly motivated individual who can thrive in a fast-paced environment and successfully contribute to the team. The ideal candidate will have a can-do attitude, a passion for technology, extensive development experience, and the ability to learn quickly.

In this role, you will develop enterprise-grade backend components to enhance performance, responsiveness, server-side logic, and the platform, applying technology in the right context with justified study backing each decision. Your responsibilities will include triaging, debugging, and ensuring timely resolution of software defects, as well as participating in functional spec, design, and code reviews. Following standard practices to develop and maintain application code, you will take an active role in reducing technical debt in various codebases. Additionally, you will develop high-quality, secure, scalable software solutions based on technical requirements specifications and design artifacts, within expected time and budget.

Fortinet is looking for candidates with 8-12 years of experience in software engineering, high-level expertise in Python programming and frameworks (Flask/FastAPI), and excellent knowledge of RDBMS (MySQL, PostgreSQL, etc.), MongoDB, queueing systems, and the ES stack. Experience building REST API-based microservices, strong data structures knowledge, and multi-threading/multi-processing programming skills are required. Candidates should also be experienced in building high-performing, distributed, scalable, enterprise-grade applications. Experience with AWS services (ECS, ELB, Lambda, SQS, VPC, EC2, IAM, S3, etc.), Docker, and Kubernetes is preferred. Excellent problem-solving and troubleshooting skills are essential, along with the ability to discuss technical topics with both technical and business audiences. Candidates should be self-motivated and able to accomplish tasks with minimal direction. Experience with cybersecurity engineering is considered a plus.

The team culture at Fortinet emphasizes collaboration, continuous improvement, customer-centricity, innovation, and accountability. By embedding these values into its ethos and culture, Fortinet creates a dynamic and supportive environment that drives excellence and innovation while maintaining a strong focus on customers' needs and satisfaction. Fortinet encourages candidates from all backgrounds and identities to apply. A supportive work environment and a competitive Total Rewards package are offered to support overall health and financial well-being. Embark on a challenging, enjoyable, and rewarding career journey with Fortinet and join in bringing solutions that make a meaningful and lasting impact on over 660,000 customers globally.
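The posting asks for multi-threading/multi-processing skills alongside Python service work; a minimal sketch of the usual distinction is below. The workload functions are contrived examples, not from the posting.

```python
import math
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

# Minimal sketch contrasting threads vs. processes in Python.
# Workloads are contrived examples.

def io_bound(url_id: int) -> str:
    # Placeholder for an HTTP/DB call; threads suit I/O-bound work
    # because the GIL is released while waiting.
    time.sleep(0.1)
    return f"fetched-{url_id}"

def cpu_bound(n: int) -> int:
    # Pure computation; processes sidestep the GIL for CPU-bound work.
    return sum(int(math.sqrt(i)) for i in range(n))

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=8) as pool:
        print(list(pool.map(io_bound, range(4))))
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(cpu_bound, [200_000] * 4)))
```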
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
The ideal candidate will have expertise in storage systems, including SAN/NAS, storage provisioning, backup solutions, and performance monitoring. Additionally, experience with AWS storage services such as S3, EBS, EFS, and Glacier, as well as basic cloud infrastructure management, is required. Preferred certifications include AWS Certified Solutions Architect - Associate or AWS Certified Cloud Practitioner.

As a Storage Specialist, your responsibilities will include managing daily storage operations such as provisioning, monitoring, and troubleshooting. You will also support AWS storage services by handling tasks such as bucket management, lifecycle policies, and cost optimization. Collaboration with senior engineers on complex projects such as storage migrations and disaster recovery planning is a key part of this role, as is documenting processes and contributing to knowledge-sharing within the team.

Soft skills are highly valued, including the ability to work under the guidance of senior engineers, eagerness to learn, and strong problem-solving skills. This position requires a proactive approach to learning and adapting to new technologies and methodologies. If you hold a graduate degree and possess the technical expertise and soft skills mentioned above, we encourage you to apply for this challenging and rewarding position on our team.
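To ground the S3 lifecycle-policy responsibility, here is a minimal boto3 sketch that applies a transition-and-expiration rule to a bucket. The bucket name, prefix, and timings are illustrative assumptions.

```python
import boto3

# Minimal sketch: transition logs to Glacier after 90 days and expire
# them after 365. Bucket, prefix, and timings are assumptions.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="app-logs-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
print("lifecycle policy applied")
```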
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Full Stack Developer, you will utilize your expertise in React.js, Node.js, and AWS Lambda to develop a custom enterprise platform that interacts with various SDLC tools. This platform aims to enhance tool administration, automate access provisioning and deprovisioning, manage licenses efficiently, and provide centralized dashboards for governance and monitoring.

With a minimum of 4-6 years of hands-on experience in full stack development, you should have a strong command of React.js for building component-based front-end architecture. Your backend skills in Node.js and proficiency in RESTful API development will be crucial to the success of this project, and your solid experience with AWS services such as Lambda, API Gateway, DynamoDB, and S3 will be highly valued.

Your role will also involve integrating and automating workflows for SDLC tools like JIRA, Jenkins, GitLab, Bitbucket, GitHub, and SonarQube. A good understanding of OAuth2, SSO, and API key-based authentication is essential, and familiarity with CI/CD pipelines, microservices, and event-driven architectures will further enhance your contributions. In-depth knowledge of Git and modern development practices, strong problem-solving skills, and the ability to work independently will also serve you well in this role.

While not mandatory, experience with Infrastructure-as-Code tools like Terraform or CloudFormation would be advantageous, as would experience with AWS EventBridge, Step Functions, or other serverless orchestration tools. Knowledge of enterprise-grade authentication methods such as LDAP, SAML, or Okta, and familiarity with monitoring/logging tools like CloudWatch, ELK, or DataDog, are also desirable. Join us in this exciting opportunity to work on a cutting-edge enterprise platform and contribute to streamlining processes and enhancing efficiency within the organization.
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
At NiCE, we believe in challenging our limits and striving for excellence. If you are ambitious, a game-changer, and always play to win, we have the perfect career opportunity to ignite your passion.

We are currently seeking an experienced Software Engineer to join our dynamic team in Pune, India. In this role, you will collaborate with a team of highly skilled engineers on the applications and services that support our omni-channel, proactive communication platform. Your responsibilities will include writing, testing, and maintaining code that meets internal guidelines and industry best practices. You will be expected to deliver high-quality features independently, actively contribute to low-level design definitions, and ensure that applications are built to modern security standards.

To be successful in this role, you should have 2 to 5 years of software engineering experience and strong proficiency in C#, including object-oriented programming and modern design patterns. Experience with the .NET Core framework, relational databases (preferably MySQL), microservice architectural patterns, Docker, Kubernetes, the Angular framework, and AWS services is required, along with excellent communication skills and strong analytical abilities.

Joining our team at NiCE means becoming part of a global company that values innovation and growth. We offer a fast-paced, collaborative, and creative work environment where you will have the opportunity to learn and develop your skills. With endless internal career opportunities available, you can explore various roles, domains, and locations within the organization. At NiCE, we follow the NICE-FLEX hybrid model, which allows for maximum flexibility in work arrangements: 2 days from the office and 3 days remote each week, fostering teamwork, collaboration, and innovation.

If you are passionate, innovative, and driven to raise the bar constantly, you might be the perfect fit for our team at NiCE. Join us in shaping extraordinary customer experiences, combating financial crime, and ensuring public safety with our innovative software solutions. Become a part of NiCE, where excellence knows no bounds!
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Cloud Platform Engineer, you will play a crucial role in developing and maintaining Terraform modules and patterns for AWS and Azure. Your responsibilities will include creating platform landing zones and application landing zones, and deploying application infrastructure. Managing the lifecycle of these patterns will be a key aspect of your role, encompassing releases, bug fixes, feature integrations, and updating test cases.

You will develop and release Terraform modules, landing zones, and patterns for both AWS and Azure platforms, and provide ongoing support for these patterns, including bug fixing and maintenance. Additionally, you will integrate new features into existing patterns to enhance their functionality, ensure that updated and new patterns meet current requirements, and update and maintain test cases to guarantee reliability and performance.

Key Qualifications:
- 5+ years of AWS/Azure cloud migration experience
- Proficiency in cloud compute (EC2, EKS, Azure VMs, AKS) and storage (S3, EBS, EFS, Azure Blob Storage, Azure Managed Disks, Azure Files)
- Strong knowledge of AWS and Azure cloud services
- Expertise in Terraform
- AWS/Azure certification preferred

Mandatory Skills: Cloud AWS DevOps (minimum 5 years of migration experience)
Relevant Experience: 5-8 years

This is a full-time, permanent, or contractual/temporary job with a contract length of 12 months.

Benefits:
- Health insurance
- Provident Fund

Schedule: Day shift, Monday to Friday, morning shift

Additional Information: Performance bonus, yearly bonus
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a skilled Full Stack Developer with a strong backend focus on Node.js and working knowledge of React.js. You will be responsible for developing a custom enterprise platform that interfaces with SDLC tools like JIRA, Jenkins, GitLab, and others. The platform aims to streamline license and access management, automate administrative tasks, and provide robust dashboards and governance features.

With 4-6 years of professional development experience, you possess strong expertise in Node.js, including async patterns and performance tuning, and hands-on experience with AWS Lambda and serverless architecture. You have experience building integrations with tools like JIRA, Jenkins, GitLab, and Bitbucket. Working knowledge of React.js for UI development and integration is essential, along with a solid understanding of RESTful APIs, webhooks, and API security. Familiarity with Git and collaborative development workflows is also required, as is exposure to CI/CD practices and infrastructure as code.

It would be beneficial if you have experience with AWS services such as DynamoDB, S3, EventBridge, and Step Functions. Familiarity with enterprise SSO, OAuth2, or SAML is a plus, along with prior experience automating tool administration tasks and DevOps workflows, and an understanding of modern monitoring/logging tools like CloudWatch or ELK.

In this role, you will have the opportunity to work on a transformative platform with direct enterprise impact. You will have the freedom to innovate and contribute to the automation of key IT and DevOps functions, and gain exposure to modern architectures, including serverless, microservices, and event-driven systems. The work culture is collaborative and outcome-oriented, providing a conducive environment for growth and learning.
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
NTT DATA is looking for an AWS DevOps Engineer to join their team in Pune, Maharashtra (IN-MH), India. As an AWS DevOps Engineer, you will be responsible for building and maintaining a robust, scalable, real-time data streaming platform leveraging AWS and Confluent Cloud infrastructure. Your key responsibilities will include developing and building the platform, monitoring performance, collaborating with cross-functional teams, managing code using Git, applying Infrastructure-as-Code principles using Terraform, and implementing CI/CD practices using GitHub Actions.

The ideal candidate must have strong proficiency in AWS services such as IAM roles, RBAC access control, S3, Lambda functions, VPC, security groups, RDS, and CloudWatch. Hands-on experience with Kubernetes (EKS) and expertise in managing resources and services like Pods, Deployments, and Helm charts are required, as is expertise in Datadog, Docker, Python, Go, Git, Terraform, and CI/CD tools. An understanding of security best practices and familiarity with tools like Snyk, SonarCloud, and CodeScene is also necessary.

Nice-to-have skills include prior experience with streaming platforms like Apache Kafka, knowledge of unit testing around Kafka topics, and experience with Splunk integration for logging and monitoring. Familiarity with Software Development Life Cycle (SDLC) principles is a plus.

NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. They are committed to helping clients innovate, optimize, and transform for long-term success. NTT DATA offers diverse experts in more than 50 countries and a robust partner ecosystem. Their services include business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is dedicated to providing digital and AI infrastructure and is part of the NTT Group, which invests significantly in R&D to support organizations and society in moving confidently into the digital future. Visit us at us.nttdata.com.
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Java professional with over 7 years of experience in software design and development, you will contribute to platform requirements and story development. Your role will involve designing, coding, and participating in code and design reviews, as well as developing use cases and writing unit tests as part of the continuous integration pipeline.

Your technical skills should include proficiency in software design and development, with a strong background in Java, Spring Boot, and microservices. Experience in unit testing with JUnit, as well as in designing and architecting systems with high scalability and performance requirements, will be crucial. Knowledge of cloud SaaS technologies such as AWS SNS/SQS, RDS, Mongo, S3, and Elasticsearch is essential, and Go experience is a plus.

In addition to your technical expertise, excellent communication skills are necessary for effectively articulating technical challenges and solutions. You should be skilled at interfacing with both internal and external technical resources, adept at debugging problems, and capable of mentoring teams on the technical front.

Key Requirements:
- 7+ years of experience in Java, Spring Boot, and microservices
- Ability to work independently and troubleshoot effectively
- Self-driven, with knowledge of cloud SaaS technologies (AWS SNS/SQS, RDS, Mongo, S3, Elasticsearch)
- Nice to have: Go (Golang) experience

Process Skills:
- Agile, Scrum, and Test-Driven Development methodologies

Qualifications & Experience:
- Minimum of 7 years of experience
- Graduate in any field

Join us at Ara's Client, a leading IT solutions provider, and leverage your expertise to contribute to the development of innovative solutions using cutting-edge technologies.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Lucknow, Uttar Pradesh
On-site
You will be part of an Indian/global digital organization, playing a crucial role in designing and implementing backend services that handle high-throughput, low-latency workloads. Your responsibilities will include architecting secure and observable APIs and data services to ensure 99.99% availability. You will lead integrations with external platforms such as Google, Meta, and TikTok to maintain consistent data synchronization, and drive platform observability and operational excellence through metrics, tracing, and alerting frameworks.

As a senior member of the team, you will mentor junior engineers and actively participate in system-level design and code reviews. Collaborating cross-functionally, you will contribute to the delivery of features involving machine learning, analytics, and optimization engines. Leveraging your expertise in backend development within distributed, scalable systems, you will work with technologies including Kafka, PostgreSQL, ClickHouse, Redis, S3, and object-storage-based designs. Your role will also involve applying SOLID principles and clean-code practices while maintaining awareness of infrastructure costs and FinOps. Setting up unit/integration tests, CI/CD pipelines, and rollback strategies will be essential for ensuring the reliability and efficiency of the systems you work on.

Key skills required for this position include a strong background in Java and microservices architecture, knowledge of distributed systems and high-performance backend services, familiarity with technologies like Kafka, PostgreSQL, ClickHouse, Redis, and S3, and a solid understanding of API development, CI/CD pipelines, and observability tools. Your practice of clean code, SOLID principles, and cost-aware infrastructure planning will be a valuable asset in this role.

To be eligible for this position, you should hold a degree in B.Tech, M.Tech (Dual), M.Tech, MCA, M.Sc., M.E., or CA in Computer Engineering, Computer Science Engineering, or Computer Technology.
Posted 1 month ago
12683 Jobs |