
29 AWS EC2 Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

1.0 - 5.0 years

0 Lacs

Hyderabad, Telangana

On-site

At EY, you will have the opportunity to craft a career that is as unique as you are, leveraging the global scale, support, inclusive culture, and cutting-edge technology to enhance your potential. Your distinctive voice and perspective are valued at EY, as we aim to continuously improve with your input. By joining us, you will not only create an exceptional journey for yourself but also contribute to building a better working world for all.

As an Enterprise Data & Analytics - Jr Java Developer, you are expected to have 3+ years of experience in Java development, with at least 1 year working with Apache Camel. Solid knowledge of SQL, UNIX, and agile development methodologies is essential, and strong communication skills are a must. Additional experience in areas such as Kafka, Python, OCP containers, AWS EC2, and Snowflake would be highly beneficial.

EY is committed to building a better working world by creating long-term value for clients, people, and society while fostering trust in the capital markets. Leveraging data and technology, diverse teams across over 150 countries provide assurance and support clients in their growth, transformation, and operations. Through services spanning assurance, consulting, law, strategy, tax, and transactions, EY teams tackle complex issues by asking insightful questions to uncover innovative solutions for the challenges of today's world.

Posted 22 hours ago

Apply

5.0 - 9.0 years

0 Lacs

Punjab

On-site

As a Tech Lead cum Full Stack Developer at Big Wings, you will be responsible for enhancing the TMS platform using your expertise in React, Node.js, PostgreSQL, and AWS. The ideal candidate should have prior experience in logistics software, API integrations, and scalable architectures.

Front-End Development:
- Develop a modern, user-friendly interface using React.
- Implement Redux for state management and RTK for HTTP requests.
- Design a clean, efficient UI using Material-UI components.
- Optimize performance using Vite for module bundling and fast builds.
- Integrate Google Maps API and HERE Maps API for real-time tracking and geolocation services.

Back-End Development:
- Develop and maintain APIs using Node.js with Express.
- Implement JWT-based authentication for secure user access.
- Build and maintain RESTful APIs for front-end and third-party integrations.
- Optimize performance for real-time dispatching, load tracking, and vehicle management.

Database Management:
- Use PostgreSQL for structured relational data storage.
- Leverage MongoDB as a NoSQL alternative where needed.
- Ensure database performance, security, and scalability.

Cloud Infrastructure and Deployment:
- Deploy and manage services on AWS (EC2 for hosting, S3 for storage, RDS for database management).
- Optimize server performance and cloud costs.
- Implement scalable and secure cloud-based solutions.

Security and Compliance:
- Ensure data security and role-based access control (RBAC).
- Maintain session timeout mechanisms for inactive users.
- Implement logging and audit trails for user activities.

Requirements:
- 5+ years of full-stack development experience and 2 years of team/project handling, preferably in logistics or SaaS.
- Expertise in React, Redux, Material-UI, RTK, and Vite.
- Strong experience in Node.js with Express for backend development.
- Hands-on experience with PostgreSQL and MongoDB.
- Experience integrating Google Maps API and HERE Maps API.
- Cloud expertise in AWS (EC2, S3, RDS).
- Strong understanding of RESTful API design and authentication (JWT).

Nice to have:
- Experience in AI/ML for logistics optimization.
- Knowledge of IoT and telematics integrations.
- Background in TMS or supply chain software development.

Join us at Big Wings and contribute to the growth and innovation of our TMS platform under the guidance of Meenu Baweja.
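The back-end responsibilities above center on JWT-based authentication for the Express API. As an illustration of what those tokens are, here is a minimal HS256 sign/verify sketch in Python using only the standard library. A real service would use a maintained library (jsonwebtoken for Node, PyJWT for Python) and would also set and check an `exp` claim; this sketch only shows the compact header.payload.signature structure.

```python
import base64
import hashlib
import hmac
import json


def _b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with the trailing padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_jwt(payload: dict, secret: str) -> str:
    """Build a compact HS256 JWT: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{_b64url(sig)}"


def verify_jwt(token: str, secret: str):
    """Return the payload if the signature checks out, else None."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{body}".encode()
    expected = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig):
        return None
    # restore the base64 padding before decoding
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
```

The constant-time `hmac.compare_digest` matters here: comparing signatures with `==` would leak timing information to an attacker probing the auth endpoint.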

Posted 2 days ago

Apply

7.0 - 11.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

You are an experienced Senior Tech Resource with a background in full-stack development and AWS infrastructure. You have 7-10 years of experience in building, deploying, and maintaining web applications and possess proficiency in various technologies. As a self-starter, you take ownership of projects and lead teams to deliver high-quality solutions.

Your qualifications include a Master's or Bachelor's degree in Computer Science, Information Technology, or a related field. You have proven experience in full-stack development and strong proficiency in PHP, JavaScript, and modern frameworks such as CodeIgniter, Laravel, React, Angular, and Vue.js. You also have a deep understanding of AWS services and experience managing cloud infrastructure. Your familiarity with database technologies such as MySQL, PostgreSQL, and MongoDB, along with excellent problem-solving skills, enables you to work independently.

Your key responsibilities involve designing, developing, and maintaining full-stack web applications using modern frameworks and technologies. You will manage AWS infrastructure, collaborate with cross-functional teams to deliver solutions meeting business objectives, lead and mentor junior developers, troubleshoot and optimize application performance, and implement best practices for software development. Your skills in JavaScript, React, MySQL, Angular, PHP, PostgreSQL, Vue.js, CodeIgniter, Laravel, MongoDB, and AWS services (EC2, S3, RDS, Lambda) will be crucial in fulfilling these responsibilities effectively. Strong communication and collaboration skills will further enhance your contribution to the team.

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a Database Administrator at Unlimit, you will play a crucial role in the administration of a diverse set of databases, including Oracle, MongoDB, PostgreSQL, and MySQL. Your responsibilities will involve working closely with application development teams to ensure that the database design aligns with business requirements. You will analyze the database architecture, implement enhancements to support application needs, and study database load profiles to optimize performance.

Your role will also include identifying and mitigating potential bottlenecks, optimizing slow-performing queries, and implementing a proactive database monitoring system. You will provide regular reports on database health, performance, and capacity planning while designing and maintaining automation processes for deploying and upgrading databases in various environments. It will be your responsibility to ensure that the database infrastructure can be quickly deployed and restored using tools like Docker, AWS EC2, and AWS RDS.

Collaboration with the security team to implement best practices and maintain the integrity and security of all databases, especially given the sensitive nature of financial data, will be essential. You will also conduct regular security audits, address vulnerabilities, implement backup strategies, and design recovery plans to minimize downtime and data loss. Furthermore, you will work closely with other IT team members and departments to align strategies and meet the database needs of the organization. Providing technical support and training for staff will also be part of your responsibilities.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Information Technology, or a related field, along with a minimum of 3 years of experience as a Database Administrator, preferably in the financial sector. Proficiency in Oracle, MongoDB, PostgreSQL, and MySQL administration, as well as experience with AWS EC2, AWS RDS, and Docker, is required. Strong knowledge of SQL, database performance tuning, and familiarity with financial industry regulations and standards are advantageous. Excellent problem-solving skills, attention to detail, and strong communication skills are also essential for this position.

In return, Unlimit offers an attractive monthly salary in line with experience; vacation, sick, and paid holidays; flexible working hours; and a modern workplace equipped with all necessary equipment. You will have the opportunity to work with a team of top international professionals in a multicultural environment. Join the Unlimit team now and be part of our dynamic and innovative organization.
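The posting asks for backup strategies and recovery plans that minimize downtime and data loss. One common retention scheme is grandfather-father-son (GFS); the sketch below is an illustrative policy chooser, not Unlimit's actual process, and the tier sizes are arbitrary defaults. Given the dates of existing daily backups, it returns the subset to keep: recent dailies, recent Sunday weeklies, and recent first-of-month monthlies.

```python
from datetime import date, timedelta


def backups_to_keep(backup_dates, keep_daily=7, keep_weekly=4, keep_monthly=12):
    """Grandfather-father-son retention: return the set of backup dates
    to keep across daily, weekly (Sunday), and monthly (1st) tiers."""
    dates = sorted(set(backup_dates), reverse=True)  # newest first
    keep = set()
    # most recent N daily backups
    keep.update(dates[:keep_daily])
    # most recent N Sunday backups form the weekly tier
    sundays = [d for d in dates if d.weekday() == 6]
    keep.update(sundays[:keep_weekly])
    # most recent N first-of-month backups form the monthly tier
    month_firsts = [d for d in dates if d.day == 1]
    keep.update(month_firsts[:keep_monthly])
    return keep
```

Everything outside the returned set is eligible for pruning; in an RDS or Docker setup the same logic would drive snapshot deletion, while RDS automated backups handle the short-term point-in-time window separately.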

Posted 2 days ago

Apply

6.0 - 8.0 years

11 - 13 Lacs

Hyderabad, Gurugram, Bengaluru

Work from Office

iSource Services is hiring for one of their clients for the position of Java Developer.

About the role: We are looking for a skilled Java Developer with strong expertise in Spring Boot, Microservices, and AWS to join our growing team. The ideal candidate must have a proven track record of delivering scalable backend solutions and a minimum of 4 years of hands-on experience with AWS services.

Key Responsibilities:
- Develop and maintain high-performance Java applications using Spring Boot and Microservices architecture
- Integrate with AWS services including Lambda, DynamoDB, SQS, SNS, S3, ECS, and EC2
- Work with event-driven architecture using Kafka
- Collaborate with cross-functional teams to define, design, and ship new features
- Ensure the performance, quality, and responsiveness of applications

Required Skills:
- Strong proficiency in Java (8+), Spring Boot, and Microservices
- Minimum 4 years of hands-on experience with AWS (Lambda, DynamoDB, SQS, SNS, S3, ECS, EC2)
- Experience with Kafka for real-time data streaming
- Solid understanding of system design, data structures, and algorithms
- Excellent problem-solving and communication skills
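Integrating with AWS services such as DynamoDB and SQS at scale means handling throttling, which the AWS SDKs address with exponential backoff plus jitter. A minimal sketch of the full-jitter variant (shown in Python for brevity; the Java SDK applies the same idea internally, and the parameter defaults here are illustrative):

```python
import random


def backoff_delays(max_retries=5, base=0.5, cap=30.0, rng=random.random):
    """Full-jitter exponential backoff: the i-th retry sleeps a uniform
    random amount in [0, min(cap, base * 2**i)). Injecting `rng` keeps
    the function testable."""
    return [min(cap, base * 2 ** i) * rng() for i in range(max_retries)]
```

The jitter is the important part: if every client backed off by the same deterministic schedule, retries would arrive in synchronized waves and re-trigger the throttling they are meant to escape.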

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

We are looking for individuals with experience in core AWS and GCP services and strong technical skills. The ideal candidate should have a proven track record of troubleshooting issues across systems related to web applications and databases, along with a solid background in networking and operating systems. Key requirements for this role include proficiency in Kubernetes, Docker, and Linux.

Responsibilities:
- Understand the current application infrastructure.
- Hands-on experience with core AWS services such as EC2, VPC, S3, Kubernetes, Load Balancer, CloudWatch, CodeDeploy, AWS Inspector, and similar technologies in GCP.
- Set up administrator and service accounts, with in-depth knowledge of monitoring AWS instances and services.
- Work in AWS and GCP through the Command Line Interface (CLI) and management console.
- Architect and configure Virtual Private Clouds (VPCs).
- Proficiency in Terraform and Kubernetes.
- Monitor and audit systems.
- Familiarity with networking concepts like DNS, TCP/IP, and firewalls.
- Manage secure, private AWS and GCP clouds.
- Build automated infrastructure using open source tools.
- Experience with Windows/Linux operating systems and storage technologies.
- Hands-on experience with Microsoft Active Directory, DNS, and Group Policies.
- Strong troubleshooting and analytical skills in the context of web applications and databases.
- Ability to estimate usage costs and implement operational cost control measures.
- Previous experience working on small or medium production cloud environments.
- Relevant experience as a Cloud Engineer.

This position requires a proactive and detail-oriented individual who can effectively manage cloud environments and contribute to the overall success of the team.
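One requirement above is estimating usage costs and implementing cost controls. A back-of-the-envelope estimator might look like the sketch below; the hourly rates are placeholders, not live AWS pricing (real figures come from the pricing API or console), and `spot_discount` is only a rough model of Spot savings.

```python
# Illustrative hourly rates in USD: assumptions, NOT live AWS pricing
HOURLY_RATES = {
    "t3.medium": 0.0416,
    "m5.large": 0.096,
    "c5.xlarge": 0.17,
}


def monthly_cost(instance_type, count=1, hours_per_month=730, spot_discount=0.0):
    """Rough monthly EC2 estimate. spot_discount=0.7 models ~70% savings
    from Spot capacity; 730 is the average hours in a month."""
    rate = HOURLY_RATES[instance_type]
    return round(rate * hours_per_month * count * (1 - spot_discount), 2)
```

Even a crude estimator like this is enough to compare "two m5.large on demand" against "t3.medium on Spot" before committing, which is the operational habit the listing is asking for.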

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an Angular Developer at nerdAppLabs Software Solutions Pvt. Ltd., you will be part of a dynamic team working on cutting-edge technology in the fields of IIoT, AI, and Edge MLOps. We are a trusted partner for various companies, contributing to the development and support of products across different domains. Our collaboration with platforms like SugarCRM, Fledge, FogLAMP, MyMap, and OptTown has enabled organizations to excel in their respective industries.

At nerdAppLabs, we take pride in our role in the commercial open source Industrial IoT (IIoT) and Edge AI/ML space, where we are actively involved in creating a leading platform for industrial data pipelines, insights, and actions. We are seeking passionate individuals who are enthusiastic about these rapidly growing fields. Our company culture is centered around a supportive, positive environment driven by motivated team players, and we value diversity in people, ideas, and backgrounds. As an engineering partner to Dianomic Systems, we build open-source solutions under the Linux Foundation and commercial IIoT products for IT/OT integration with intelligence.

Role Overview: We are looking for a skilled Angular Developer who excels at translating complex backend logic and REST APIs into user-friendly visual interfaces. Your primary focus will be on data-driven UI/UX design, where the design process starts with system behavior rather than just Figma files. You will build modern, modular applications that visualize data flows, facilitate user interactions with pipelines, and provide real-time system feedback. Your work will involve creating control panels, dashboards, and visual builders that are grounded in architecture, data, and storytelling.

Your Responsibilities:
- Develop Angular components to visualize intricate backend pipelines and workflows
- Focus on UI/UX design, translating architectural details and API specifications into engaging, real-time interfaces
- Utilize browser dev tools for effective debugging of Angular applications
- Implement design patterns in Angular such as component-based, observer, singleton, and factory
- Write E2E tests for interactive interfaces using Cypress or Playwright
- Deploy applications using Docker on NGINX, Apache, or cloud VM instances (e.g., AWS EC2)
- Contribute to modular architecture patterns like plugin-based or micro frontends
- Collaborate closely with system architects and backend developers

Required Skills:
- 3+ years of experience in Angular (v10+) with advanced TypeScript and RxJS knowledge
- Proficiency in HTML5, CSS3, and responsive web design
- Strong understanding of REST API integration, JSON handling, and data binding
- Experience working directly from system designs, API contracts, or flow specifications
- Focus on UI/UX design without relying solely on pixel-perfect mockups
- Familiarity with Bulma, Bootstrap, or CSS utility frameworks
- Solid Git skills including branching, pull requests, and conflict resolution
- Knowledge of NGINX/Apache for frontend deployment
- Experience writing automated UI tests with Cypress or Playwright
- Comfortable with Docker and cloud VMs (e.g., AWS EC2)

Additional Skills to Consider:
- Familiarity with SCSS (Sassy CSS) and its advanced features
- Experience building dashboards, admin panels, or analytics UIs
- Knowledge of graph libraries like Chart.js and Plotly.js
- Strong understanding of UI/UX principles, color theory, typography, and design systems
- Proficiency in tools like Figma or Excalidraw for wireframing and prototyping

Strongly Preferred:
- Experience with visual programming or flow-based UIs such as Rete.js or Node-RED
- Understanding of micro frontend patterns and plugin-based architecture
- Previous work in IoT, data pipelines, or centralized management systems for Edge devices
- Familiarity with event-driven UIs and system state management (NgRx, etc.)
- Experience with JSON fixtures/stubs for APIs and knowledge of Swagger/OpenAPI tools

What You'll Gain:
- Ownership over rich, technical, visual frontends driving real-world workflows
- The opportunity to shape interface logic directly from architecture and APIs
- A modern tech stack with flexibility and autonomy
- A team culture that values engineering insight, creativity, and impact

Join us at nerdAppLabs and be part of a team that is revolutionizing the future of technology in IIoT, Machine Learning, AI, and Edge Computing!

Posted 1 week ago

Apply

2.0 - 5.0 years

0 - 0 Lacs

Nagpur

Remote

Key Responsibilities:
- Provision and manage GPU-based EC2 instances for training and inference workloads.
- Configure and maintain EBS volumes and Amazon S3 buckets (versioning, lifecycle policies, multipart uploads) to handle large video and image datasets.
- Build, containerize, and deploy ML workloads using Docker and push images to ECR.
- Manage container deployment using Lambda, ECS, or AWS Batch for video inference jobs.
- Monitor and optimize cloud infrastructure using CloudWatch, Auto Scaling Groups, and Spot Instances to ensure cost efficiency.
- Set up and enforce IAM roles and permissions for secure access control across services.
- Collaborate with the AI/ML, annotation, and backend teams to streamline cloud-to-model pipelines.
- Automate cloud workflows and deployment pipelines using GitHub Actions, Jenkins, or similar CI/CD tools.
- Maintain logs, alerts, and system metrics for performance tuning and auditing.

Required Skills:

Cloud & Infrastructure:
- AWS services: EC2 (GPU), S3, EBS, ECR, Lambda, Batch, CloudWatch, IAM
- Data management: large file transfer, S3 multipart uploads, storage lifecycle configuration, archive policies (Glacier/IA)
- Security & access: IAM policies, roles, access keys, VPC (preferred)

DevOps & Automation:
- Tools: Docker, GitHub Actions, Jenkins, Terraform (bonus)
- Scripting: Python and shell scripting for automation and monitoring
- CI/CD: experience building and managing pipelines for model and API deployments

ML/AI Environment Understanding:
- Familiarity with GPU-based ML workloads
- Knowledge of model training and inference architecture (batch and real-time)
- Experience with containerized ML model execution is a plus

Preferred Qualifications:
- 2-5 years of experience in DevOps or Cloud Infrastructure roles
- AWS Associate/Professional certification (DevOps/Architect) is a plus
- Experience managing data-heavy pipelines, such as drone, surveillance, or video AI systems
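Moving large video datasets into S3 relies on multipart uploads, which must respect two hard S3 limits: each part except the last is at least 5 MiB, and an upload has at most 10,000 parts. A small planner for choosing a part size (the 64 MiB default target is an arbitrary starting point, not an AWS requirement):

```python
MIN_PART = 5 * 1024 ** 2   # S3 minimum part size: 5 MiB (last part exempt)
MAX_PARTS = 10_000         # S3 maximum number of parts per upload


def plan_multipart(size_bytes, target_part=64 * 1024 ** 2):
    """Pick a (part_size, part_count) for an S3 multipart upload that
    respects the 5 MiB minimum and the 10,000-part maximum."""
    part = max(MIN_PART, target_part)
    # grow the part size until the object fits within 10,000 parts
    while size_bytes > part * MAX_PARTS:
        part *= 2
    parts = -(-size_bytes // part)  # ceiling division
    return part, parts
```

In practice `boto3`'s transfer manager does this planning for you via `TransferConfig(multipart_chunksize=...)`, but knowing the limits explains why a naive 5 MiB chunk size fails for objects past roughly 48.8 GiB.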

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

Vola Finance is a rapidly expanding fintech company that is transforming the landscape of financial access and management. Our platform empowers individuals to enhance their financial well-being and take charge of their expenditures through a range of innovative tools and solutions. With the support of top-tier investors, we are dedicated to crafting products that have a significant positive impact on the lives of our users. Our founding team comprises enthusiastic leaders with extensive backgrounds in finance and technology, committed to cultivating a culture of creativity, teamwork, and excellence within our organization.

As a member of our team, your primary responsibilities will include:
- Developing churn prediction models utilizing advanced machine learning algorithms based on user transactional and behavioral data
- Constructing regression models to predict users' income and balances using transaction data
- Creating customer segmentation and recommendation engines for cross-selling initiatives
- Building natural language processing models to gauge customer sentiment
- Developing propensity models and conducting lifetime value (LTV) analysis
- Establishing modern data pipelines and processing systems using AWS PaaS components like Glue and SageMaker Studio
- Utilizing API tools such as REST, Swagger, and Postman
- Deploying models in the AWS environment and managing the production setup
- Collaborating effectively with cross-functional teams to collect data and derive insights

Essential technical skill set:
1. Prior experience in fintech product and growth strategy
2. Proficiency in Python
3. Strong grasp of linear regression, logistic regression, and tree-based machine learning algorithms
4. Sound knowledge of statistical analysis and A/B testing
5. Familiarity with AWS services such as SageMaker, S3, EC2, and Docker
6. Experience with REST APIs, Swagger, and Postman
7. Proficiency in Excel
8. Competence in SQL
9. Ability to work with visualization tools like Redash or Grafana
10. Familiarity with versioning tools like Bitbucket and GitHub
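Several of the skills listed above (A/B testing in particular) come down to deciding whether two conversion rates differ significantly. A minimal two-proportion z-test, the standard first tool for that decision, can be sketched with nothing but the math module (the sample numbers in the test are invented for illustration):

```python
import math


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates, using
    the pooled standard error. |z| > 1.96 is significant at the usual
    5% two-sided level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

Libraries like `statsmodels` (`proportions_ztest`) wrap the same formula with p-values attached; writing it out once makes the pooled-variance assumption visible.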

Posted 1 week ago

Apply

5.0 - 12.0 years

0 - 0 Lacs

Chennai, Tamil Nadu

On-site

Job Description: All Care Therapies, a rapidly growing IT and medical back-office management company, is seeking a talented and experienced Full Stack Developer (React/Next.js) to join their team in Chennai. As a Full Stack Developer, you will be responsible for providing technical support, troubleshooting production issues, and maintaining high availability of services. You will work with cross-functional teams to analyze and resolve application-related issues while enhancing existing web applications and contributing to the development of modern web applications.

Key Responsibilities:
- Provide technical support and troubleshooting for production issues.
- Work with cross-functional teams to analyze, debug, and resolve application-related issues.
- Maintain and enhance existing web applications built using C#, ASP.NET 4.7 Framework, jQuery, and Entity Framework.
- Handle data-related queries and optimize performance using SQL Server and LINQ.
- Understand and troubleshoot reports built using AWS QuickSight.
- Contribute to development, maintenance, and bug fixes of modern web applications built using React and Next.js (as needed).

Required Skills:
- Strong hands-on experience with React and Next.js.
- Solid understanding of SQL Server, stored procedures, and performance tuning.
- Proficiency in C#, ASP.NET (4.7), MVC Web API, Entity Framework, and LINQ.
- Proficiency in jQuery, JavaScript, and front-end troubleshooting.
- Exposure to Telerik Scheduler or similar scheduling controls.
- Familiarity with AWS services (EC2, RDS) and basic cloud infrastructure.
- Experience using JIRA for ticketing and agile sprint management.
- Knowledge of GitHub for code versioning and collaboration.
- Good communication skills and ability to work independently in a remote-support role.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field from a reputed college or institute.
- Exposure to the healthcare or therapy scheduling domain is a plus.
- Experience with AWS QuickSight dashboards and reporting is an advantage.

Why Join Us:
- Opportunity to work in a fast-growing healthcare technology company.
- Direct involvement with US-based clients and products.
- Competitive salary and night shift allowance.
- Friendly work culture with long-term growth.
- Group health insurance.
- Leave encashment on gross.
- Yearly bonus.
- 12 paid Indian & US holidays.
- US visa.

If you have the required experience and are ready to join a dynamic team, apply now for this exciting opportunity at All Care Therapies. We offer competitive compensation packages and a comprehensive benefits program.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As an Associate Software Engineer at SmartBear, you will play a crucial role in the transformation of QMetry Test Management for Jira. You will be tasked with solving complex business problems and developing highly scalable applications that offer exceptional user experiences. Working under the guidance of the Lead Engineer, you will design, document, and implement new systems in Java 17/21 while adhering to security and Java best practices.

Your responsibilities will include developing backend services and REST APIs using Java, Spring Boot, and JSON, as well as creating new products, writing code as per product requirements, and contributing to automated and system testing. You will collaborate with both technical and business stakeholders to ensure the delivery of high-quality products that meet business requirements. Additionally, you will be involved in developing scalable, real-time, low-latency data egress/ingress solutions in an agile environment.

The ideal candidate should have 2-4 years of experience with the Java 17 platform or higher and a Bachelor's degree in Computer Science, Computer Engineering, or a related technical field. Proficiency in API-driven development, OOP, Java, the Spring Framework, and JPA is required. Experience with relational databases such as MySQL, PostgreSQL, MSSQL, and Oracle, along with familiarity with AWS services, Docker, GitHub, and Agile methodologies, is desirable. Prior exposure to the Atlassian suite of products and a SCRUM environment is a plus.

Joining the SmartBear team offers you the opportunity to grow your career at every level. We value your success and provide collaborative workspaces where teams can work, innovate, and enjoy. Our culture celebrates individuality and diversity, and we believe that embracing differences leads to better outcomes. SmartBear is dedicated to ethical corporate practices, social responsibility, and making a positive impact in the communities we serve. If you are looking to be part of a dynamic team driving innovation in software development, SmartBear could be the perfect fit for you. Explore the possibilities with us and contribute to building great software solutions that empower developers, testers, and software engineers worldwide.

Posted 1 week ago

Apply

2.0 - 7.0 years

8 - 10 Lacs

Bengaluru, HSR Layout

Work from Office

Responsibilities:
1. Build big, robust, scalable, and maintainable applications
2. Debug, fix bugs, identify performance issues, and improve app performance
3. Set up and manage AWS EC2 instances, S3 buckets, and related services
4. Manage server configurations and deployments
5. Continuously discover, evaluate, and implement new technologies to maximize development efficiency
6. Handle complex technical issues related to web app development and discuss solutions with the team
7. Manage and monitor application deployment using DevOps tools
8. Configure and maintain AWS EC2 instances
9. Set up and manage Docker containers
10. Implement automated deployment pipelines
11. Monitor application performance and server health
12. Manage database backups and recovery procedures
13. Handle server security and SSL certificate management

Requirements:
1. Proficient understanding of Python, with knowledge of the Django web framework
2. Knowledge of version control with Git
3. Understanding of the threading limitations of Python and multi-process architecture
4. Understanding of fundamental design principles behind a scalable application
5. Experience with AWS services (EC2, S3, CloudWatch) and basic Linux commands
6. Knowledge of Docker and container orchestration
7. Basic understanding of CI/CD pipelines and DevOps practices
8. Familiarity with deployment strategies and server configuration
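The responsibilities above include SSL certificate management, and the routine part of that job is alerting before a certificate expires. A small sketch using the stdlib `ssl` helper that parses the `notAfter` string format returned by `ssl.getpeercert()` (the certificate date in the test is invented for illustration):

```python
import ssl
from datetime import datetime, timezone


def days_until_expiry(not_after: str, now: datetime) -> int:
    """Days remaining before a certificate's notAfter timestamp.
    `not_after` uses the format ssl.getpeercert() returns, e.g.
    'Jun 1 12:00:00 2030 GMT'."""
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(not_after), tz=timezone.utc
    )
    return (expires - now).days
```

In a monitoring script you would fetch the live cert with `ssl.create_default_context()` and a wrapped socket, then page when the result drops below, say, 14 days; taking `now` as a parameter keeps the calculation itself deterministic and testable.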

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As an AI Developer with 5-8 years of experience, you will be based in Pune with a hybrid working model, and should be able to join immediately or within 15 days. Your primary responsibility will be to develop and maintain Python applications, focusing on API building, data processing, and transformation. You will use LangGraph to design and manage complex language model workflows and work with machine learning and text processing libraries to deploy agents.

Must-have skills include proficiency in Python programming with a strong understanding of object-oriented programming concepts, and extensive experience with data manipulation libraries like Pandas and NumPy, writing clean, efficient, and maintainable code. You will develop and maintain real-time data pipelines and microservices to ensure seamless data flow and integration across systems. On the SQL side, you are expected to have a strong understanding of basic query syntax, including joins, WHERE, and GROUP BY clauses.

Good-to-have skills include practical experience in AI development applications; knowledge of parallel processing and multi-threading/multi-processing to optimize data fetching and execution times; familiarity with SQLAlchemy or similar libraries for data fetching; and experience with AWS cloud services such as EC2, EKS, Lambda, and Postgres.

If you are looking to work in a dynamic environment where you can apply your skills in Python, SQL, Pandas, NumPy, agentic AI development, CI/CD pipelines, AWS, and Generative AI, this role might be the perfect fit for you.
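The SQL expectations above (joins, WHERE, GROUP BY) can be demonstrated end to end with Python's built-in sqlite3 module; the schema and rows below are invented purely for illustration:

```python
import sqlite3

# In-memory toy schema: orders joined to customers, aggregated per city
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Asha', 'Pune'), (2, 'Ravi', 'Pune'), (3, 'Meera', 'Mumbai');
    INSERT INTO orders VALUES (1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0), (4, 3, 200.0);
""")

# JOIN to attach each order to a city, WHERE to filter small orders,
# GROUP BY to aggregate per city
rows = conn.execute("""
    SELECT c.city, COUNT(*) AS n_orders, SUM(o.amount) AS total
    FROM orders AS o
    JOIN customers AS c ON c.id = o.customer_id
    WHERE o.amount > 40
    GROUP BY c.city
    ORDER BY total DESC
""").fetchall()
```

The same query shape ports directly to the Postgres side of the stack, and SQLAlchemy Core expresses it with `select(...).join(...).where(...).group_by(...)`.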

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As an Associate Software Engineer III at SmartBear, you will play a key role in solving challenging business problems and building highly scalable applications that offer exceptional user experiences. Reporting to the Lead Engineer, you will be responsible for developing solutions using the latest tools and technologies, participating in problem resolution, and communicating status, issues, and risks in a timely manner.

Your primary responsibilities will include designing, documenting, and implementing new systems in Java 17/21, developing backend services and REST APIs using Java, Spring Boot, and JSON, and contributing to system testing and agile development processes. You will collaborate with both business and technical stakeholders to deliver high-quality products and services that meet business requirements while staying abreast of the latest technologies.

To be successful in this role, you should have 2-4 years of experience with the Java 17 platform or higher, a Bachelor's degree in Computer Science, Computer Engineering, or a related field, and a solid understanding of API-driven development, OOP, Java, the Spring Framework, and JPA. Experience with relational databases, AWS services, the Atlassian suite of products, and Agile methodologies is highly desirable.

At SmartBear, we offer a supportive environment where you can grow your career at every level. We value your success and well-being, and we encourage a healthy work-life balance by celebrating our team members and promoting a culture of inclusivity and diversity. Join us in making our technology-driven world a better place and be part of a team that is committed to ethical practices and social responsibility. SmartBear is headquartered in Somerville, MA, with offices worldwide, including locations in Ireland, the UK, Poland, and India. Our dedication to innovation and excellence has earned us prestigious industry awards, and we take pride in creating a workplace where every individual's unique experiences and perspectives contribute to our collective success.

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

Jaipur, Rajasthan

On-site

As an AI/ML Engineer (Python) at Telepathy Infotech, you will be responsible for building and deploying machine learning and GenAI applications in real-world scenarios. You will be part of a passionate team of technologists working on innovative digital solutions for clients across industries. We value continuous learning, ownership, and collaboration in our work culture.

To excel in this role, you should have strong Python skills and experience with libraries like Pandas, NumPy, Scikit-learn, and TensorFlow/PyTorch. Experience in GenAI development using APIs such as Google Gemini, Hugging Face, and Grok is highly desirable. A solid understanding of ML, DL, NLP, and LLM concepts is essential, along with hands-on experience in Docker, Kubernetes, and CI/CD pipeline creation. Familiarity with Streamlit, Flask, FastAPI, MySQL/PostgreSQL, AWS services (EC2, Lambda, RDS, S3, API Gateway), LangGraph, serverless architectures, and vector databases like FAISS and Pinecone will be advantageous. Proficiency in version control using Git is also required.

Ideally, you should have a B.Tech/M.Tech/MCA degree in Computer Science, Data Science, AI, or a related field, with 1-5 years of relevant experience or a strong project/internship background in AI/ML. Strong communication skills, problem-solving abilities, self-motivation, and a willingness to learn emerging technologies are key qualities we are looking for in candidates.

Working at Telepathy Infotech will give you the opportunity to contribute to impactful AI/ML and GenAI solutions while collaborating in a tech-driven, agile environment. You will have the chance to grow your career in one of India's fastest-growing tech companies with a transparent and supportive company culture.

To apply for this position, please send your CV to hr@telepathyinfotech.com or contact us at +91-8890559306 for any queries. Join us on our journey of innovation and growth in the field of AI and ML at Telepathy Infotech.
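Vector databases like FAISS and Pinecone, mentioned above, accelerate one operation: nearest-neighbour search over embeddings. A brute-force cosine-similarity version of that lookup (toy three-dimensional vectors here; real embeddings have hundreds of dimensions) shows exactly what those systems index:

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def top_k(query, vectors, k=2):
    """Brute-force nearest neighbours by cosine similarity: the lookup
    FAISS or Pinecone accelerates with approximate indexes (IVF, HNSW)."""
    scored = sorted(vectors.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [name for name, _ in scored[:k]]
```

Brute force is O(n) per query, which is fine for thousands of vectors; the approximate indexes in FAISS/Pinecone trade a little recall for sublinear lookup once collections reach millions.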

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

chennai, tamil nadu

On-site

You will be a key member of our team as an AWS EC2 and RDS Oracle Database Administrator, bringing 3 to 5 years of experience to the table. Your expertise will centre on managing AWS EC2 instances and RDS Oracle databases, and on troubleshooting Windows and Linux systems. You will play a crucial part in ensuring the high availability and reliability of our infrastructure while collaborating with cross-functional teams to tackle technical challenges effectively.

Your responsibilities will include managing and maintaining AWS EC2 instances (provisioning, monitoring, and troubleshooting) and administering RDS Oracle databases, covering installation, configuration, backup, and recovery. You will also perform basic troubleshooting for both Windows and Linux operating systems, monitor system performance, and contribute to the development and implementation of best practices for cloud infrastructure and database management.

To excel in this role, you should have a strong grasp of Oracle CORE both on-premises and in AWS cloud environments, with secondary expertise as an MSSQL DBA. Your qualifications should include 3 to 5 years of experience working with AWS EC2 instances and RDS Oracle databases, proficiency in basic troubleshooting for Windows and Linux systems, and a deep understanding of cloud computing concepts and best practices. Problem-solving skills, attention to detail, the ability to work both independently and collaboratively, and excellent written and verbal communication skills are essential for success in this position. Relevant certifications such as AWS Certified Solutions Architect or AWS Certified SysOps Administrator would be a valuable asset.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 - 0 Lacs

bangalore, hyderabad, pune

On-site

Job Description: We are hiring experienced Spring Boot Developers with AWS expertise to build scalable backend applications and cloud-native solutions. The ideal candidate should be well-versed in microservices architecture, REST APIs, and hands-on cloud deployment using AWS.

Roles & Responsibilities:
- Design, develop, and maintain microservices-based applications using Spring Boot
- Integrate applications with AWS services such as EC2, S3, Lambda, RDS, etc.
- Build RESTful APIs and ensure secure, scalable, and high-performing applications
- Write clean and efficient code following best coding practices
- Collaborate with frontend developers, DevOps, and QA teams
- Work with containerization tools like Docker and orchestration using Kubernetes
- Optimize performance, troubleshoot issues, and handle production deployments
- Participate in code reviews, agile ceremonies, and continuous improvement processes

Requirements:
- Bachelor's/Master's degree in Computer Science or related field
- 2-6 years of experience in backend development with Spring Boot
- Strong hands-on knowledge of AWS cloud services
- Proficiency in Java, JPA/Hibernate, and SQL/NoSQL databases
- Experience with REST APIs, microservices, and cloud-native design patterns
- Familiarity with Git, CI/CD pipelines, Jenkins, and Agile methodologies
- Experience with Docker, Kubernetes, and monitoring tools is a plus
- Strong problem-solving and communication skills

To Apply: Please walk in directly (Monday to Saturday, 9 AM to 6 PM). Free job placement assistance. White Horse Manpower: get placed in Fortune 500 companies. Address: #12, Office 156, 3rd Floor, Jumma Masjid Golden Complex, Jumma Masjid Road, Bangalore 560051. Contact Numbers: 9632024646 / 8550878550

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

maharashtra

On-site

You should have hands-on experience working with Elasticsearch, Logstash, Kibana, Prometheus, and Grafana monitoring systems. Your responsibilities will include installation, upgrade, and management of ELK, Prometheus, and Grafana systems, and you should be proficient in their administration, configuration, performance tuning, and troubleshooting. You must also know various clustering topologies, such as redundant assignments and active-passive setups, and have experience deploying clusters on multiple cloud platforms such as AWS EC2 and Azure.

Experience in Logstash pipeline design and in search index optimization and tuning is required. You will be responsible for implementing security measures and ensuring compliance with security policies and procedures such as the CIS benchmarks. Collaboration with other teams to ensure seamless integration of the environment with other systems is essential, as is creating and maintaining documentation related to the environment.

Key skills required for this position include certification in monitoring systems like ELK, RHCSA/RHCE, experience on the Linux platform, and knowledge of monitoring tools such as Prometheus, Grafana, the ELK stack, ManageEngine, or any APM tool. Educational qualifications should include a Bachelor's degree in Computer Science, Information Technology, or a related field. The ideal candidate should have 4-7 years of relevant experience; the work location for this position is Mumbai.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

4 - 9 Lacs

Kolkata, Pune, Chennai

Work from Office

We are seeking an experienced Python Developer with strong hands-on expertise in AWS cloud services and data libraries. The ideal candidate will be proficient in designing and deploying applications using Python and AWS (Lambda, EC2, S3), and familiar with DevOps tools such as GitLab. Experience with NumPy and Pandas for data processing or ML-related tasks is essential.
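The listing asks for Pandas/NumPy experience for data processing. As an illustration only (the column names and values below are invented, not taken from the listing), a minimal Pandas aggregation of the kind such roles involve:

```python
import pandas as pd

# Hypothetical event data; in practice this might be read from S3 with pd.read_csv
df = pd.DataFrame({
    "service": ["lambda", "ec2", "lambda", "s3"],
    "duration_ms": [120, 340, 80, 45],
})

# Average duration per service, sorted from slowest to fastest
summary = (
    df.groupby("service")["duration_ms"]
    .mean()
    .sort_values(ascending=False)
)
print(summary)
```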

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

haryana

On-site

Enlog is a leading provider of electricity management solutions for energy-conscious smart buildings. Our mission is to create a greener earth by promoting the smart and efficient use of electricity. Since our inception in 2017, we have focused on transforming homes and businesses into energy-conscious smart spaces, enhancing quality of life and comfort through our innovative energy management technologies. Join our dynamic team and be part of our journey to make energy-conscious living a reality.

As a member of our team, you will be responsible for:
- Possessing in-depth knowledge of object-relational mapping, server-side logic, and REST APIs.
- Demonstrating expertise in databases such as MySQL, PostgreSQL, and other relational and non-relational databases.
- Utilizing AWS Lightsail, Celery, Celery-beat, Redis, and Docker.
- Conducting testing of REST APIs using Postman.
- Managing code and projects on Git to ensure synchronization with other team members and managers.
- Coordinating with front-end developers to ensure seamless integration.

Why You Should Work Here: At Enlog, we value innovation, collaboration, and continuous improvement. By joining our team, you will have the opportunity to work on cutting-edge technologies and contribute to projects that have a significant impact on energy management and sustainability. Our dynamic and supportive work environment fosters personal and professional growth. We are committed to maintaining a diverse and inclusive workplace that enables everyone to thrive. Joining Enlog means becoming part of a team dedicated to promoting smart and efficient energy use and making a positive impact on the environment and society.

Technologies We Use:
- AWS EC2
- AWS Lightsail
- Docker
- Kafka
- PostgreSQL
- Golang
- Django REST Framework
- MQTT protocols
- Kubernetes
- PgBouncer
- ClickHouse
- ScyllaDB
- DragonFly

About Enlog: Founded in 2017, Enlog provides electricity management solutions for energy-conscious smart buildings. Our innovations in energy management enhance quality of life and comfort while promoting responsible electricity use. Our flagship product, Smi-Fi, is an energy-assistant IoT device that encourages energy conservation in residential and commercial spaces. With over 3,000 active installations and a growing customer base, we are at the forefront of energy management solutions in India.

Posted 3 weeks ago

Apply

0.0 - 3.0 years

3 - 7 Lacs

Thane

Hybrid

Responsibilities:
- Provide first-line and second-line technical support to customers via email, phone, or chat.
- Diagnose and resolve software issues, bugs, and technical queries efficiently and effectively.
- Create and maintain knowledge base articles, documentation, and FAQs for both internal and customer use.
- Assist with system monitoring and performance tuning to ensure software stability.
- Assist customers in product feature usage, configurations, and best practices.
- Provide training to end-users or internal teams on new features and functionalities.
- Log and track incidents in the support management system, ensuring that all issues are addressed promptly.
- Stay up-to-date with the latest software releases, patches, and updates.

Posted 1 month ago

Apply

7.0 - 9.0 years

27 - 30 Lacs

Bengaluru

Work from Office

We are seeking experienced Data Engineers with over 7 years of experience to join our team at Intuit. The selected candidates will be responsible for developing and maintaining scalable data pipelines, managing data warehousing solutions, and working with advanced cloud environments. The role requires strong technical proficiency and the ability to work onsite in Bangalore.

Key Responsibilities:
- Design, build, and maintain data pipelines to ingest, process, and analyze large datasets using PySpark.
- Work on Data Warehouse and Data Lake solutions to manage structured and unstructured data.
- Develop and optimize complex SQL queries for data extraction and reporting.
- Leverage AWS cloud services such as S3, EC2, EMR, Athena, and Redshift for data storage, processing, and analytics.
- Collaborate with cross-functional teams to ensure the successful delivery of data solutions that meet business needs.
- Monitor data pipelines and troubleshoot any issues related to data integrity or system performance.

Required Skills:
- 7+ years of experience in data engineering or related fields.
- In-depth knowledge of Data Warehouses and Data Lakes.
- Proven experience in building data pipelines using PySpark.
- Strong expertise in SQL for data manipulation and extraction.
- Familiarity with AWS cloud services, including S3, EC2, EMR, Athena, Redshift, and other cloud computing platforms.

Preferred Skills:
- Python programming experience is a plus.
- Experience working in Agile environments with tools like JIRA and GitHub.
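The role above calls for complex SQL for extraction and reporting. As a generic illustration (the table, values, and threshold are invented, and the stdlib sqlite3 module stands in for a warehouse engine such as Redshift or Athena), an aggregate-and-filter query of that kind:

```python
import sqlite3

# In-memory database standing in for a warehouse table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("south", 120.0), ("south", 80.0), ("north", 50.0), ("north", 30.0)],
)

# Total revenue per region, keeping only regions above a threshold
rows = conn.execute(
    """
    SELECT region, SUM(amount) AS revenue
    FROM orders
    GROUP BY region
    HAVING revenue > 100
    ORDER BY revenue DESC
    """
).fetchall()
print(rows)  # [('south', 200.0)]
```

The same GROUP BY / HAVING shape applies unchanged on the warehouse engines the listing names; only the connection layer differs.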

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Pune, Chennai, Bengaluru

Work from Office

Mandatory key skills: Athena, Step Functions, Spark (PySpark), ETL fundamentals, SQL (basic + advanced), Glue, Python, Lambda, Data Warehousing, EBS/EFS, AWS EC2, Lake Formation, Aurora, S3, Modern Data Platform fundamentals, PL/SQL, CloudFront

We are looking for an experienced AWS Data Engineer to design, build, and manage robust, scalable, and high-performance data pipelines and data platforms on AWS. The ideal candidate will have a strong foundation in ETL fundamentals, data modeling, and modern data architecture, with hands-on expertise across a broad spectrum of AWS services including Athena, Glue, Step Functions, Lambda, S3, and Lake Formation.

Key Responsibilities:
- Design and implement scalable ETL/ELT pipelines using AWS Glue, Spark (PySpark), and Step Functions.
- Work with structured and semi-structured data using Athena, S3, and Lake Formation to enable efficient querying and access control.
- Develop and deploy serverless data processing solutions using AWS Lambda and integrate them into pipeline orchestration.
- Perform advanced SQL and PL/SQL development for data transformation, analysis, and performance tuning.
- Build data lakes and data warehouses using S3, Aurora, and Athena.
- Implement data governance, security, and access control strategies using AWS tools including Lake Formation, CloudFront, EBS/EFS, and IAM.
- Develop and maintain metadata, lineage, and data cataloging capabilities.
- Participate in data modeling exercises for both OLTP and OLAP environments.
- Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights.
- Monitor, debug, and optimize data pipelines for reliability and performance.

Required Skills & Experience:
- Strong experience with AWS data services: Glue, Athena, Step Functions, Lambda, Lake Formation, S3, EC2, Aurora, EBS/EFS, CloudFront.
- Proficiency in PySpark, Python, SQL (basic and advanced), and PL/SQL.
- Solid understanding of ETL/ELT processes and data warehousing concepts.
- Familiarity with modern data platform fundamentals and distributed data processing.
- Experience in data modeling (conceptual, logical, physical) for analytical and operational use cases.
- Experience with orchestration and workflow management tools within AWS.
- Strong debugging and performance tuning skills across the data stack.
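The listing above mentions orchestrating Glue and Lambda steps with Step Functions. A Step Functions workflow is just a JSON document in the Amazon States Language; the sketch below builds a minimal, hypothetical two-state definition in Python. The state names, Glue job name, and Lambda function name are invented for illustration:

```python
import json

# Hypothetical Amazon States Language definition:
# run a Glue job, then invoke a Lambda notification step
state_machine = {
    "Comment": "Minimal ETL orchestration sketch",
    "StartAt": "RunGlueJob",
    "States": {
        "RunGlueJob": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "example-etl-job"},
            "Next": "NotifyLambda",
        },
        "NotifyLambda": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {"FunctionName": "example-notify"},
            "End": True,
        },
    },
}

# The serialized definition is what you would pass to CreateStateMachine
definition = json.dumps(state_machine, indent=2)
print(definition)
```

In practice this definition would be deployed via the AWS console, CLI, or Terraform rather than built inline.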

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 24 Lacs

Hyderabad

Remote

Role & Responsibilities:
1. Prepare Helm charts and package applications for deployment.
2. Create manifests and database tunnels for seamless development and testing.
3. Create and maintain development support tools and CI/CD pipelines for multiple projects.
4. Understand the product and create dependency maps to ensure smooth project workflows.
5. Maintain and optimize DevOps tools, including GitLab on-premises and GitPods.
6. Support and configure container registries, code scanners, and code reporting tools.
7. Integrate and execute testing processes within CI/CD pipelines.
8. Utilize Terraform for infrastructure provisioning and management.

Operational:
9. Gain expertise in databases, including backups, restores, high availability, and failover strategies.
10. Implement least-privileged access and set up database tunneling for developer access.
11. Ensure comprehensive backups of repositories and branching structures.
12. Demonstrate proficiency in Kubernetes and Docker, with hands-on experience in CRDs, StatefulSets, PVs, PVCs, Docker volumes, and security contexts.
13. Experience with Helm for Kubernetes package management.
14. Utilize Ansible for configuration management.
15. Possess practical knowledge and experience in Infrastructure as Code (IaC), VMware vSphere, Linux, and configuration management.
16. Implement, provision, and monitor a fleet of servers.

People:
17. Monitor infrastructure using tools such as Prometheus, Grafana, and Alertmanager.
18. Work with log aggregation systems, write queries, and set up log shipping.
19. Have hands-on experience with Python and Bash scripting to automate routine tasks.
20. Use practical knowledge of CNCF incubated tools like Longhorn, Velero, Kasten, Harbor, and Rancher to build and maintain private clouds.
21. Implement DevSecOps practices and security tools to enhance the security of infrastructure, network, and storage layers.

Preferred Candidate Profile:
1. Bachelor's degree in a related field or equivalent work experience.
2. Proficiency with scripting languages (Python, Bash) for automation.
3. Excellent understanding of GCP, AWS EC2, Linux, Kubernetes, Docker, Helm, Terraform, Ansible, Jenkins, GitLab CI, GitLab Runner, Longhorn, k3s, Velero backup, MinIO, and other related technologies.

Posted 1 month ago

Apply

12.0 - 17.0 years

30 - 35 Lacs

Bengaluru

Work from Office

The Role: Sr. Engineer, Database Engineering

The Team: We are looking for a highly self-motivated, hands-on Sr. Engineer, Database Engineering, who will focus on our database infrastructure estate, automations, and DevOps engineering within our enterprise solutions division.

The Impact: This is an excellent opportunity to join Enterprise Solutions as we transform and harmonize our infrastructure into a unified place, while also developing your skills and furthering your career as we plan to power the markets of the future.

What's in it for you: This is the place to apply your existing database, infrastructure, DevOps, and leadership skills while having the chance to become exposed to fresh and divergent technologies (e.g. AWS, Snowflake, Terraform, Python, CI/CD).

Responsibilities:

Team Leadership:
- Lead and mentor a team of DBAs, fostering a collaborative and high-performance work environment.
- Assign tasks, manage workloads, and ensure team members meet project deadlines.
- Conduct performance reviews and identify training needs to enhance technical capabilities.

Database Management:
- Oversee the installation, configuration, and maintenance of SQL Server, Oracle, and other database systems.
- Manage and optimize databases hosted on AWS RDS and AWS EC2 for performance, scalability, and security.
- Implement automated backup, restore, and recovery strategies for cloud-based databases.
- Manage database security policies, ensuring protection against unauthorized access.

Performance & Optimization:
- Monitor database performance and proactively implement tuning strategies.
- Optimize AWS RDS instances and EC2-hosted databases for cost efficiency and performance.
- Analyze system logs, resolve issues, and ensure minimal downtime.

Project & Change Management:
- Collaborate with development teams to support database design, deployment, and schema changes.
- Manage database migrations, upgrades, and patching processes, including AWS services.

Incident & Problem Management:
- Act as an escalation point for critical database issues.
- Drive root cause analysis for incidents and ensure preventive measures are implemented.

Documentation & Compliance:
- Maintain accurate documentation of database configurations, processes, and recovery procedures.
- Ensure compliance with data governance, security standards, and AWS best practices.

What We're Looking For:
- Technical Expertise: Proficient in SQL Server, Oracle, AWS RDS, and EC2 database environments.
- Cloud Knowledge: Strong understanding of AWS database services, including security, scaling, and cost optimization.
- Leadership Skills: Proven experience managing a DBA team or leading technical projects.
- Problem-Solving: Strong analytical skills with a proactive approach to troubleshooting.
- Communication: Excellent verbal and written communication skills for effective collaboration.
- Certifications: Preferred certifications include AWS Certified Database - Specialty, Microsoft Certified: Azure Database Administrator Associate, Oracle DBA certifications, or equivalent.

Experience Requirements:
- Minimum 12+ years of hands-on DBA experience.
- At least 2 years of experience in a leadership or team lead role.
- Experience working with AWS RDS, AWS EC2, and on-premises database environments.

Preferred Skills:
- Experience in PowerShell, T-SQL, and Python for automation.
- Knowledge of CI/CD pipelines and DevOps practices for database deployments.

Posted 2 months ago

Apply
Page 1 of 2