5.0 - 10.0 years
10 - 20 Lacs
Hyderabad
Hybrid
We're looking for a talented, results-oriented Cloud Solutions Architect to work as a key member of Sureify's engineering team. You'll help build and evolve our next-generation cloud-based compute platform for digitally delivered life insurance, weighing dimensions such as strategic goals, growth models, opportunity cost, talent, and reliability. You'll collaborate closely with the product development team on platform feature architecture so that it aligns with operational needs and opportunities. With our customer base growing rapidly, it's time for us to mature the fabric our software runs on. This is your opportunity to make a large impact at a high-growth enterprise software company.

Key Responsibilities:
- Collaborate with key stakeholders across our product, delivery, data, and support teams to design scalable and secure application architectures on AWS, using services such as EC2, ECS, EKS, Lambda, VPC, RDS, and ElastiCache provisioned via Terraform
- Design and implement CI/CD pipelines using GitHub, Jenkins, Spinnaker, and Helm to automate application deployment and updates, with a focus on container management, orchestration, scaling, performance and resource optimization, and deployment strategies
- Design and implement security best practices for AWS applications, including Identity and Access Management (IAM), encryption, container security, and secure coding practices
- Design and implement application observability using CloudWatch and New Relic, with a focus on monitoring, logging, and alerting to provide insights into application performance and health
- Design and implement key integrations between application components and external systems, ensuring smooth and efficient data flow
- Diagnose and resolve issues related to application performance, availability, and reliability
- Create, maintain, and prioritise a quarter-over-quarter backlog by identifying key areas of improvement such as cost optimization, process improvement, and security enhancements
- Create and maintain comprehensive documentation covering infrastructure design, integrations, deployment processes, and configuration
- Work closely with the DevOps team as a guide, mentor, and enabler so that the practices you design and implement are followed and absorbed by the team

Required Skills:
- Proficiency in AWS services such as EC2, ECS, EKS, S3, RDS, VPC, Lambda, SES, SQS, ElastiCache, Redshift, and EFS
- Strong programming skills in languages such as Groovy, Python, and Bash shell scripting
- Experience with CI/CD tools and practices, including Jenkins, Spinnaker, and ArgoCD
- Familiarity with IaC tools like Terraform or CloudFormation
- Understanding of AWS security best practices, including IAM and KMS
- Familiarity with Agile development practices and methodologies
- Strong analytical skills with the ability to troubleshoot and resolve complex issues
- Proficiency with observability, monitoring, and logging tools like AWS CloudWatch, New Relic, and Prometheus
- Knowledge of container orchestration tools and concepts, including Kubernetes and Docker
- Strong teamwork and communication skills, with the ability to work effectively with cross-functional teams

Nice to have: AWS Certified Solutions Architect - Associate or Professional
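To make the observability and alerting responsibilities above more concrete, here is a minimal sketch of creating a CloudWatch alarm on an ECS service with boto3; the cluster, service, SNS topic, and thresholds are illustrative placeholders, not details from the posting.

```python
# Hypothetical names (cluster, service, SNS topic) are placeholders, not from the posting.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="orders-service-high-cpu",
    AlarmDescription="Alert when the ECS service sustains high CPU for 3 periods",
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "prod-cluster"},
        {"Name": "ServiceName", "Value": "orders-service"},
    ],
    Statistic="Average",
    Period=300,                # 5-minute evaluation window
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:platform-alerts"],
)
```

In a Terraform-managed environment the same alarm would normally be declared as infrastructure-as-code rather than created imperatively; the boto3 call simply shows the parameters involved.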
Posted 1 week ago
3.0 - 6.0 years
0 - 3 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Role & responsibilities:
- Implement and manage AIOps platforms for intelligent monitoring, alerting, anomaly detection, and root cause analysis (RCA).
- Possess end-to-end knowledge of vLLM model hosting and inferencing.
- Advanced knowledge of public cloud platforms such as AWS and Azure.
- Build and maintain machine learning pipelines and models for predictive maintenance, anomaly detection, and noise reduction.
- Experience in production support and real-time issue handling.
- Design dashboards and visualizations to provide operational insights to stakeholders.
- Working knowledge of Bedrock, SageMaker, EKS, Lambda, etc.
- 1 to 2 years of experience with Jenkins and GoCD for build/deploy pipelines.
- Hands-on experience with open-source and self-hosted model APIs using SDKs.
- Drive data-driven decisions by analyzing operational data and generating reports on system health, performance, and availability.
- Basic knowledge of KServe and RayServe inferencing.
- Good knowledge of scaling using Karpenter, KEDA, and system-based vertical/horizontal scaling.
- Strong knowledge of the Linux operating system, or Linux certification.
- Previous experience with Helm chart deployments and Terraform template and module creation is highly recommended.

Secondary Responsibilities:
- Proven experience in AIOps and DevOps, with a strong background in cloud technologies (AWS, Azure, Google Cloud).
- Proficiency in tools such as Kubeflow, KServe, ONNX, and containerization technologies (Docker, Kubernetes).
- Experience with enterprise-level infrastructure, including tools like Terraform and Helm, and on-prem server hosting.
- Previous experience in fintech or AI-based tech companies is highly desirable.
- Ability to manage workloads effectively in a production environment.
- Excellent communication and collaboration skills, with a strong focus on cross-functional teamwork.
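As a rough illustration of the anomaly detection work listed above, the sketch below flags outliers in a metric series using a rolling z-score; the window size, threshold, and synthetic data are assumptions, and a production AIOps pipeline would use real telemetry and a more robust model.

```python
# Illustrative only: a rolling z-score over a latency series; thresholds and window are assumptions.
import numpy as np
import pandas as pd

# Synthetic per-minute latency metric (ms) with an injected spike
rng = np.random.default_rng(42)
latency = pd.Series(rng.normal(120, 10, 500))
latency.iloc[400:405] += 150

rolling_mean = latency.rolling(window=60, min_periods=30).mean()
rolling_std = latency.rolling(window=60, min_periods=30).std()
z_score = (latency - rolling_mean) / rolling_std

# Flag points more than 3 sigma away from the rolling baseline
anomalies = latency[z_score.abs() > 3]
print(anomalies.head())
```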
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You will be working as a Business Intelligence Engineer III in Pune on a 6-month contract basis with the TekWissen organization. Your primary responsibility will be Data Engineering on AWS, including designing and implementing scalable data pipelines using AWS services such as S3, AWS Glue, Redshift, and Athena. You will also focus on data modeling and transformation by developing and optimizing dimensional data models to support various business intelligence and analytics use cases. Additionally, you will collaborate with stakeholders to understand reporting and analytics requirements and build interactive dashboards and reports using visualization tools like the client's QuickSight. Your role will also involve implementing data quality checks and monitoring processes to ensure data integrity and reliability. You will be responsible for managing and maintaining the AWS infrastructure required for the data and analytics platform, optimizing performance, cost, and security of the underlying cloud resources. Collaboration with cross-functional teams and sharing knowledge and best practices will be essential for identifying data-driven insights.

As a successful candidate, you should have at least 3 years of experience as a Business Intelligence Engineer or Data Engineer, with a strong focus on AWS cloud technologies. Proficiency in designing and implementing data pipelines using AWS services like S3, Glue, Redshift, Athena, and Lambda is mandatory. You should also possess expertise in data modeling, dimensional modeling, and data transformation techniques, plus experience deploying business intelligence solutions using tools like QuickSight and Tableau. Strong SQL and Python programming skills are required for data processing and analysis. Knowledge of cloud architecture patterns, security best practices, and cost optimization on AWS is crucial. Excellent communication and collaboration skills are necessary to work effectively with cross-functional teams. Hands-on experience with Apache Spark, Airflow, or other big data technologies, as well as familiarity with AWS DevOps practices and tools, agile software development methodologies, and AWS certifications, will be considered preferred skills.

The position requires a candidate with a graduate degree. TekWissen Group is an equal opportunity employer supporting workforce diversity.
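For context on the AWS-based pipelines mentioned above, here is a minimal boto3 sketch that runs an Athena query and polls for its result; the database, table, and S3 output location are hypothetical placeholders.

```python
import time
import boto3

# Placeholder database, table, and S3 output location; not taken from the posting.
athena = boto3.client("athena", region_name="ap-south-1")

query = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) AS revenue FROM sales GROUP BY order_date",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
execution_id = query["QueryExecutionId"]

# Poll until the query finishes, then fetch the first page of results
while True:
    status = athena.get_query_execution(QueryExecutionId=execution_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=execution_id)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows)} rows")
```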
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
At our organization, we prioritize people and are dedicated to providing cutting-edge AI solutions with integrity and passion. We are currently seeking a Senior AI Developer who is proficient in AI model development, Python, AWS, and scalable tool-building. In this role, you will play a key part in designing and implementing AI-driven solutions, developing AI-powered tools and frameworks, and integrating them into enterprise environments, including mainframe systems.

Your responsibilities will include developing and deploying AI models using Python and AWS for enterprise applications, building scalable AI-powered tools, designing and optimizing machine learning pipelines, implementing NLP and GenAI models, developing Retrieval-Augmented Generation (RAG) systems, maintaining AI frameworks and APIs, architecting cloud-based AI solutions using AWS services, writing high-performance Python code, and ensuring the scalability, security, and performance of AI solutions in production.

To qualify for this role, you should have at least 5 years of experience in AI/ML development, expertise in Python and AWS, a strong background in machine learning and deep learning, experience with LLMs, NLP, and RAG systems, hands-on experience building and deploying AI models, proficiency in cloud-based AI solutions, experience developing AI-powered tools and frameworks, knowledge of mainframe integration and enterprise AI applications, and strong coding skills with a focus on software development best practices. Preferred qualifications include familiarity with MLOps, CI/CD pipelines, and model monitoring, a background in developing AI-based enterprise tools and automation, and experience with vector databases and AI-powered search technologies.

Additionally, you will benefit from health insurance, accident insurance, and a competitive salary based on various factors including location, education, qualifications, experience, technical skills, and business needs. You will also be expected to actively participate in monthly team meetings, team-building efforts, technical discussions, and peer reviews, contribute to the OP-Wiki/Knowledge Base, and provide status reports to OP Account Management as required.

OP is a technology consulting and solutions company that offers advisory and managed services, innovative platforms, and staffing solutions across various fields such as AI, cybersecurity, and enterprise architecture. Our team is comprised of dynamic, creative thinkers who are dedicated to delivering quality work. As a member of the OP team, you will have access to industry-leading consulting practices, strategies, technologies, and innovative training and education. We are looking for a technology leader with a strong track record of technical excellence and a focus on process and methodology.
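To illustrate the retrieval step behind the RAG systems mentioned above, here is a toy sketch using cosine similarity over precomputed embeddings; in practice the vectors would come from an embedding model and a vector store, whereas here they are random stand-ins.

```python
# Toy retrieval step of a RAG pipeline: cosine similarity over precomputed embeddings.
# In practice the vectors would come from an embedding model; here they are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
documents = ["policy summary", "claims process", "underwriting rules", "premium schedule"]
doc_vectors = rng.normal(size=(len(documents), 384))              # pretend 384-dim embeddings
query_vector = doc_vectors[1] + rng.normal(scale=0.1, size=384)   # query close to document 1

def cosine_sim(matrix: np.ndarray, vector: np.ndarray) -> np.ndarray:
    matrix_norm = matrix / np.linalg.norm(matrix, axis=-1, keepdims=True)
    vector_norm = vector / np.linalg.norm(vector)
    return matrix_norm @ vector_norm

scores = cosine_sim(doc_vectors, query_vector)
top_k = np.argsort(scores)[::-1][:2]
context = "\n".join(documents[i] for i in top_k)  # passed to the LLM prompt as retrieved context
print(context)
```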
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Principal Data Engineer (Associate Director) at Fidelity in Bangalore, you will be an integral part of the ISS Data Platform Team. This team plays a crucial role in building and maintaining the platform that supports the ISS business operations. You will have the opportunity to lead a team of senior and junior developers, providing mentorship and guidance, while taking ownership of delivering a subsection of the wider data platform. Your role will involve designing, developing, and maintaining scalable data pipelines and architectures to facilitate data ingestion, integration, and analytics.

Collaboration will be a key aspect of your responsibilities as you work closely with enterprise architects, business analysts, and stakeholders to understand data requirements, validate designs, and communicate progress. Your innovative mindset will drive technical advancements within the department, focusing on enhancing code reusability, quality, and developer productivity. By challenging the status quo and incorporating the latest data engineering practices and techniques, you will contribute to the continuous improvement of the data platform.

Your expertise in leveraging cloud-based data platforms, particularly Snowflake and Databricks, will be essential in creating an enterprise lakehouse. Additionally, your advanced proficiency in the AWS ecosystem and experience with core AWS data services like Lambda, EMR, and S3 will be highly valuable. Experience in designing event-based or streaming data architectures using Kafka, along with strong skills in Python and SQL, will be crucial for success in this role. Furthermore, your role will involve implementing data access controls to ensure data security and performance optimization in compliance with regulatory requirements. Proficiency in CI/CD pipelines for deploying infrastructure and pipelines, experience with RDBMS and NoSQL offerings, and familiarity with orchestration tools like Airflow will be beneficial. Your soft skills, including problem-solving, strategic communication, and project management, will be key in leading problem-solving efforts, engaging with stakeholders, and overseeing project lifecycles.

By joining our team at Fidelity, you will not only receive a comprehensive benefits package but also support for your wellbeing and professional development. We are committed to creating a flexible work environment that prioritizes work-life balance and motivates you to contribute effectively to our team. To explore more about our work culture and opportunities for growth, visit careers.fidelityinternational.com.
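As a small illustration of the streaming ingestion mentioned above, the sketch below consumes JSON events with the kafka-python package; the topic, brokers, and field names are assumptions rather than details of the platform.

```python
# A minimal consumer loop using the kafka-python package; topic and brokers are placeholders.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "positions-events",
    bootstrap_servers=["localhost:9092"],
    group_id="data-platform-ingest",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In a real pipeline this record would be validated and landed in S3 / the lakehouse
    print(message.topic, message.partition, message.offset, event.get("instrument_id"))
```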
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
kochi, kerala
On-site
As an AWS Cloud Engineer at our company based in Kerala, you will play a crucial role in designing, implementing, and maintaining scalable, secure, and highly available infrastructure solutions on AWS. Your primary responsibility will be to collaborate closely with developers, DevOps engineers, and security teams to support cloud-native applications and business services.

Your key responsibilities will include designing, deploying, and maintaining cloud infrastructure using various AWS services such as EC2, S3, RDS, Lambda, and VPC. Additionally, you will be tasked with building and managing CI/CD pipelines, automating infrastructure provisioning using tools like Terraform or AWS CloudFormation, and monitoring and optimizing cloud resources through CloudWatch, CloudTrail, and other third-party tools. Furthermore, you will be responsible for managing user permissions and security policies using IAM, ensuring compliance, implementing backup and disaster recovery plans, troubleshooting infrastructure issues, and responding to incidents promptly. It is essential that you stay updated with AWS best practices and new service releases to enhance our overall cloud infrastructure.

To be successful in this role, you should possess a minimum of 3 years of hands-on experience with AWS cloud services, a solid understanding of networking, security, and Linux system administration, as well as experience with DevOps practices and Infrastructure as Code (IaC). Proficiency in scripting languages such as Python and Bash, familiarity with containerization tools like Docker and Kubernetes (EKS preferred), and holding an AWS certification (e.g., AWS Solutions Architect Associate or higher) would be advantageous. It would be considered a plus if you have experience with multi-account AWS environments, exposure to serverless architecture (Lambda, API Gateway, Step Functions), and familiarity with cost optimization and the Well-Architected Framework. Any previous experience in a fast-paced startup or SaaS environment would also be beneficial.

Your expertise in AWS CloudFormation, Kubernetes (EKS), AWS services (EC2, S3, RDS, Lambda, VPC), CloudTrail, scripting (Python, Bash), CI/CD pipelines, CloudWatch, Docker, IAM, Terraform, and other cloud services will be invaluable in fulfilling the responsibilities of this role effectively.
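In the spirit of the cost and compliance monitoring described above, here is a small boto3 audit sketch that lists running EC2 instances missing an Owner tag; the tag key and region are assumptions.

```python
# Small audit script: flag running EC2 instances that lack an "Owner" tag.
# The tag key and region are assumptions, not requirements from the posting.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")
paginator = ec2.get_paginator("describe_instances")

untagged = []
for page in paginator.paginate(Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "Owner" not in tags:
                untagged.append(instance["InstanceId"])

print(f"Running instances missing an Owner tag: {untagged}")
```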
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
delhi
On-site
As a DevOps Engineer at AuditorsDesk, you will be responsible for designing, deploying, and maintaining AWS infrastructure using Terraform for provisioning and configuration management. Your role will involve implementing and managing EC2 instances, application load balancers, and AWS WAF to ensure the security and efficiency of web applications. Collaborating with development and operations teams, you will integrate security practices throughout the software development lifecycle and automate testing and deployment processes using CI/CD pipelines.

You should have a Bachelor's degree in Computer Science, Information Technology, or a related field, along with 5 to 10 years of experience working with AWS services and infrastructure. Proficiency in infrastructure as code (IaC) using Terraform, hands-on experience with load balancers, and knowledge of containerization technologies like Docker and Kubernetes are required. Additionally, familiarity with networking concepts, security protocols, scripting languages for automation, and troubleshooting skills are essential for this role. Preferred qualifications include AWS certifications such as AWS Certified Solutions Architect or AWS Certified DevOps Engineer, experience with infrastructure monitoring tools such as Prometheus, and knowledge of compliance frameworks like PCI-DSS and GDPR. Excellent communication skills and the ability to collaborate effectively with cross-functional teams are key attributes for success in this position.

This is a permanent, on-site position located in Delhi with compensation based on industry standards. If you are a proactive and detail-oriented professional with a passion for ensuring high availability and reliability of systems, we invite you to join our team at AuditorsDesk and contribute to making audit work paperless and efficient.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
As a Software Developer 2 (FSD), you will be responsible for leading the design and delivery of complex end-to-end features across frontend, backend, and data layers. Your role will involve making strategic architectural decisions, reviewing and approving pull requests, and enforcing clean-code guidelines, SOLID principles, and design patterns. Additionally, you will build and maintain shared UI component libraries and backend service frameworks for team reuse. Identifying and eliminating performance bottlenecks in both browser rendering and server throughput will be a crucial part of your responsibilities. You will also be instrumental in instrumenting services with metrics and logging, defining and enforcing comprehensive testing strategies, and owning CI/CD pipelines to automate builds, deployments, and rollback procedures. Ensuring OWASP Top-10 mitigations, WCAG accessibility, and SEO best practices will be key aspects of your role.

Furthermore, you will partner with Product, UX, and Ops teams to translate business objectives into technical roadmaps. Facilitating sprint planning, estimation, and retrospectives for predictable deliveries will be part of your routine. Mentoring and guiding SDE-1s and interns, as well as participating in hiring processes, will also be part of your responsibilities.

To qualify for this role, you should have at least 3-5 years of experience building production full stack applications end-to-end with measurable impact. Strong leadership skills in Agile/Scrum environments and proficiency in React (or Angular/Vue), TypeScript, and modern CSS methodologies are required. You should be proficient in Node.js (Express/NestJS), Python (Django/Flask/FastAPI), or Java (Spring Boot). Expertise in designing RESTful and GraphQL APIs and scalable database schemas, as well as knowledge of MySQL/PostgreSQL indexing, NoSQL databases, and caching, is essential. Experience with containerization (Docker) and AWS services such as Lambda, EC2, S3, and API Gateway is preferred. Skills in unit/integration and E2E testing, frontend profiling, backend tracing, and secure coding practices are also important. Strong communication skills, the ability to convey technical trade-offs to non-technical stakeholders, and experience providing constructive feedback are assets for this role.

In addition to technical skills, we value qualities such as a commitment to delivering high-quality software, collaboration abilities, determination, creative problem-solving, openness to feedback, eagerness to learn and grow, and strong communication skills. This position is based in Hyderabad.
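To ground the RESTful API design mentioned above, here is a minimal FastAPI sketch (one of the backend stacks the posting lists); the resource model and routes are illustrative only.

```python
# Minimal REST endpoint sketch with FastAPI; the Order model and routes are illustrative.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    id: int
    item: str
    quantity: int

_ORDERS: dict[int, Order] = {}  # in-memory store standing in for a real database

@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    _ORDERS[order.id] = order
    return order

@app.get("/orders/{order_id}")
def get_order(order_id: int) -> Order:
    if order_id not in _ORDERS:
        raise HTTPException(status_code=404, detail="Order not found")
    return _ORDERS[order_id]

# Run locally with: uvicorn main:app --reload
```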
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
chennai, tamil nadu
On-site
You should have a minimum of 10 years of experience and be proficient in setting up, configuring, and integrating API gateways in AWS. Your expertise should include API frameworks, XML/JSON, REST, and data protection in software design, build, test, and documentation. It is essential to have practical experience with various AWS services such as Lambda, S3, CDN (CloudFront), SQS, SNS, EventBridge, API Gateway, Glue, and RDS. You must be able to effectively articulate and implement projects using these AWS services to enhance business processes through integration solutions.

The job is located in Bangalore, Chennai, Mumbai, Noida, and Pune, and requires an immediate joiner. If you meet the requirements and are looking for an opportunity to contribute your skills in AWS integration and API management, we encourage you to apply for this position.
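For readers less familiar with the Lambda-backed API Gateway setup described above, a handler for a proxy integration typically has the following shape; the path parameter and response fields are placeholders.

```python
# Shape of a Lambda handler behind an API Gateway proxy integration.
# Field names follow the standard proxy event format; the business logic is a placeholder.
import json

def lambda_handler(event, context):
    path_params = event.get("pathParameters") or {}
    customer_id = path_params.get("customerId", "unknown")

    body = {"customerId": customer_id, "status": "ACTIVE"}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),  # API Gateway expects the body as a string
    }
```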
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
You will be a valuable member of the data engineering team, contributing to the development of data pipelines and data transformations and exploring new data patterns through proof-of-concept initiatives. Your role will also involve optimizing existing data feeds and implementing enhancements to improve data processes. Your primary skills should include a strong understanding of RDBMS concepts and hands-on experience with the AWS Cloud platform and its services such as IAM, EC2, Lambda, RDS, Timestream, and Glue. Additionally, proficiency in data streaming tools like Kafka, hands-on experience with ETL/ELT tools, and familiarity with databases like Snowflake or Postgres are essential. An understanding of data modeling techniques would be considered a bonus for this role.
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
karnataka
On-site
We are looking for a skilled .NET Architect with expertise in AWS to join the team in Bangalore. As a .NET Architect, you will be responsible for designing and implementing scalable and secure .NET-based solutions, utilizing AWS cloud services effectively. Your role will involve collaborating with cross-functional teams, evaluating AWS services, and maintaining comprehensive documentation. It is essential to have a strong background in .NET software development, architecture, and AWS services. Your problem-solving skills, communication abilities, and experience in leading software development teams will be crucial for this role. If you have the required experience and expertise and are ready to take on this challenge, we would like to hear from you.
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
haryana
On-site
Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With a workforce of over 125,000 professionals spanning more than 30 countries, we are fueled by our innate curiosity, entrepreneurial agility, and commitment to creating lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, drives us to serve and transform leading enterprises, including the Fortune Global 500, leveraging our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently seeking applications for the position of Principal Consultant - Databricks Lead Developer. As a Databricks Developer in this role, you will be tasked with solving cutting-edge real-world problems to meet both functional and non-functional requirements.

Responsibilities:
- Keep abreast of new and emerging technologies and assess their potential application for service offerings and products.
- Collaborate with architects and lead engineers to devise solutions that meet functional and non-functional requirements.
- Demonstrate proficiency in understanding relevant industry trends and standards.
- Showcase strong analytical and technical problem-solving skills.
- Possess experience in the Data Engineering domain.

Qualifications we are looking for:

Minimum qualifications:
- Bachelor's Degree or equivalency in CS, CE, CIS, IS, MIS, or an engineering discipline, or equivalent work experience.
- <<>> years of experience in IT.
- Familiarity with new and emerging technologies and their possible applications for service offerings and products.
- Collaboration with architects and lead engineers to develop solutions meeting functional and non-functional requirements.
- Understanding of industry trends and standards.
- Strong analytical and technical problem-solving abilities.
- Proficiency in either Python or Scala, preferably Python.
- Experience in the Data Engineering domain.

Preferred qualifications:
- Knowledge of Unity Catalog and basic governance.
- Understanding of Databricks SQL Endpoint.
- Experience with CI/CD for building Databricks job pipelines.
- Exposure to migration projects for building unified data platforms.
- Familiarity with DBT, Docker, and Kubernetes.

If you are a proactive individual with a passion for innovation and a strong commitment to continuous learning and upskilling, we invite you to apply for this exciting opportunity to join our team at Genpact.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
Join our team at Fortinet, a leading cybersecurity company dedicated to shaping the future of cybersecurity and redefining the intersection of networking and security. We are on a mission to protect people, devices, and data worldwide. Currently, we are looking for a dynamic Staff Software Development Engineer to join our rapidly growing business.

As a Staff Software Development Engineer at Fortinet, you will play a crucial role in enhancing and expanding our product capabilities. Your responsibilities will include designing and implementing core services, as well as defining the system architecture. We are seeking a highly motivated individual who excels in a fast-paced environment and can contribute effectively to the team. The ideal candidate will possess a can-do attitude, a passion for technology, extensive development experience, and a quick learning ability.

Your key responsibilities as a Staff Software Development Engineer will include:
- Developing enterprise-grade backend components to improve performance, responsiveness, server-side logic, and the platform
- Demonstrating a strong understanding of technology selection with well-justified study to support decisions
- Troubleshooting, debugging, and ensuring timely resolution of software defects
- Participating in functional spec, design, and code reviews
- Adhering to standard practices for application code development and maintenance
- Actively working towards reducing technical debt in various codebases
- Creating high-quality, secure, scalable software solutions based on technical requirements specifications and design artifacts within set timeframes and budgets

We are seeking candidates with:
- 8-12 years of experience in Software Engineering
- Proficiency in Python programming and frameworks like Flask/FastAPI
- Solid knowledge of RDBMS (e.g., MySQL, PostgreSQL), MongoDB, queueing systems, and the ES stack
- Experience in developing REST API-based microservices
- Strong grasp of data structures and multi-threading/multi-processing programming
- Experience in building high-performing, distributed, scalable, enterprise-grade applications
- Familiarity with AWS services (ECS, ELB, Lambda, SQS, VPC, EC2, IAM, S3), Docker, and Kubernetes (preferred)
- Excellent problem-solving and troubleshooting skills
- Ability to effectively communicate technical topics to both technical and business audiences
- Self-motivation and the capability to complete tasks with minimal direction
- Experience in cyber security engineering is a plus

About Our Team: Our team culture is centered around collaboration, continuous improvement, customer-centricity, innovation, and accountability. These values are ingrained in our ethos and culture, fostering a dynamic and supportive environment that promotes excellence and innovation while prioritizing our customers' needs and satisfaction.

Why Join Us: We welcome candidates from diverse backgrounds and identities to apply. We offer a supportive work environment and a competitive Total Rewards package designed to enhance your overall health and financial well-being. Embark on a challenging, fulfilling, and rewarding career journey with Fortinet. Join us in delivering solutions that have a meaningful and lasting impact on our 660,000+ customers worldwide.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
The ideal candidate for this position should have a Bachelor's or Master's degree in Computer Science, Computer Engineering, or an equivalent field. You should possess at least 2-6 years of experience in server-side development using technologies such as GoLang, Node.JS, or Python. You should demonstrate proficiency in working with AWS services like Lambda, DynamoDB, Step Functions, and S3. Additionally, you should have hands-on experience in deploying and managing serverless service environments. Experience with Docker, containerization, and Kubernetes is also required for this role. A strong background in database technologies including MongoDB and DynamoDB is preferred. You should also have experience with CI/CD pipelines and automation processes. Any experience in video transcoding/streaming on cloud would be considered a plus. Problem-solving skills are essential for this role, as you may encounter various challenges while working on projects.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
indore, madhya pradesh
On-site
As a Senior Data Scientist with 5+ years of experience, you will play a crucial role in our team based in Indore/Pune. Your responsibilities will involve designing and implementing models, extracting insights from data, and interpreting complex data structures to facilitate business decision-making. You should have a strong background in Machine Learning areas such as Natural Language Processing, Machine Vision, and Time Series. Your expertise should extend to model tuning, model validation, and supervised and unsupervised learning. Additionally, hands-on experience with model development, data preparation, and deployment of models for training and inference is essential.

Proficiency in descriptive and inferential statistics, hypothesis testing, and data analysis and exploration are key skills required for this role. You should be adept at developing code that enables reproducible data analysis. Familiarity with AWS services like SageMaker, Lambda, Glue, Step Functions, and EC2 is expected. Knowledge of data science code development and deployment IDEs such as Databricks, the Anaconda distribution, and similar tools is essential. You should also possess expertise in ML algorithms related to time series, natural language processing, optimization, object detection, topic modeling, clustering, and regression analysis.

Your skills should include proficiency in Hive/Impala, Spark, Python, Pandas, Keras, SKLearn, StatsModels, TensorFlow, and PyTorch. Experience with end-to-end model deployment and production for at least 1 year is required. Familiarity with model deployment in the Azure ML platform, Anaconda Enterprise, or AWS SageMaker is preferred. Basic knowledge of deep learning algorithms like MaskedCNN and YOLO, and of visualization and analytics/reporting tools such as Power BI, Tableau, and Alteryx, would be advantageous for this role.
Posted 1 week ago
5.0 - 10.0 years
15 - 30 Lacs
Pune, Ahmedabad
Work from Office
As a Senior Platform Engineer, you are expected to design and develop key components that power our platform. You will be building a secure, scalable, and highly performant distributed platform that connects multiple cloud platforms such as AWS, Azure, and GCP.

Job Title: Sr. Platform Engineer
Location: Ahmedabad/Pune
Experience: 5+ Years
Educational Qualification: UG: BS/MS in Computer Science, or other engineering/technical degree

Responsibilities:
- Take full ownership of developing, maintaining, and enhancing specific modules of our cloud management platform, ensuring they meet our standards for scalability, efficiency, and reliability.
- Design and implement serverless applications and event-driven systems that integrate seamlessly with AWS services, driving the platform's innovation forward.
- Work closely with cross-functional teams to conceptualize, design, and implement advanced features and functionalities that align with our business goals.
- Utilize your deep expertise in cloud architecture and software development to provide technical guidance and best practices to the engineering team, enhancing the platform's capabilities.
- Stay ahead of the curve by researching and applying the latest trends and technologies in the cloud industry, incorporating these insights into the development of our platform.
- Solve complex technical issues, providing advanced support and guidance to both internal teams and external stakeholders.

Requirements:
- A minimum of 5 years of relevant experience in platform or application development, with a strong emphasis on Python and AWS cloud services.
- Proven expertise in serverless development and event-driven architecture design, with a track record of developing and shipping high-quality SaaS platforms on AWS.
- Comprehensive understanding of cloud computing concepts, architectural best practices, and AWS services, including but not limited to Lambda, RDS, DynamoDB, and API Gateway.
- Solid knowledge of object-oriented programming (OOP), SOLID principles, and experience with relational and NoSQL databases.
- Proficiency in developing and integrating RESTful APIs and familiarity with source control systems like Git.
- Exceptional problem-solving skills, capable of optimizing complex systems.
- Excellent communication skills, capable of effectively collaborating with team members and engaging with stakeholders.
- A strong drive for continuous learning and staying updated with industry developments.

Nice to Have:
- AWS Certified Solutions Architect, AWS Certified Developer, or other relevant cloud development certifications.
- Experience with the AWS Boto3 SDK for Python.
- Exposure to other cloud platforms such as Azure or GCP.
- Knowledge of containerization and orchestration technologies, such as Docker and Kubernetes.

Experience:
- 5 years of relevant experience in platform or application development, with a strong emphasis on Python and AWS cloud services.
- 1+ years of experience working on applications built using serverless architecture.
- 1+ years of hands-on experience with microservices architecture in live projects.
- 1+ years of experience applying Domain-Driven Design principles in projects.
- 1+ years of experience working with event-driven architecture in real-world applications.
- 1+ years of experience integrating, consuming, and maintaining AWS services.
- 1+ years of experience working with Boto3 in Python.
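As a sketch of the serverless, event-driven pattern described above, the handler below processes SQS-triggered Lambda records and persists them to DynamoDB with Boto3; the table and field names are assumptions.

```python
# Sketch of an event-driven serverless handler: consume records from an SQS-triggered Lambda
# and persist them to DynamoDB with boto3. Table and field names are assumptions.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("cloud-resource-events")

def lambda_handler(event, context):
    records = event.get("Records", [])
    for record in records:
        payload = json.loads(record["body"])  # SQS delivers the message body as a string
        table.put_item(
            Item={
                "resource_id": payload["resource_id"],
                "event_type": payload.get("event_type", "unknown"),
                "received_at": record["attributes"]["SentTimestamp"],
            }
        )
    return {"processed": len(records)}
```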
Posted 1 week ago
5.0 - 10.0 years
50 - 60 Lacs
Bengaluru
Work from Office
Job Title: AI/ML Architect - GenAI, LLMs & Enterprise Automation
Location: Bangalore
Experience: 8+ years (including 4+ years in AI/ML architecture on cloud platforms)

Role Summary
We are seeking an experienced AI/ML Architect to define and lead the design, development, and scaling of GenAI-driven solutions across our learning and enterprise platforms. This is a senior technical leadership role where you will work closely with the CTO and product leadership to architect intelligent systems powered by LLMs, RAG pipelines, and multi-agent orchestration. You will own the AI solution architecture end-to-end, from model selection and training frameworks to infrastructure, automation, and observability. The ideal candidate will have deep expertise in GenAI systems and a strong grasp of production-grade deployment practices across the stack.

Must-Have Skills
- AI/ML solution architecture experience with production-grade systems
- Strong background in LLM fine-tuning (SFT, LoRA, PEFT) and RAG frameworks
- Experience with vector databases (FAISS, Pinecone) and embedding generation
- Proficiency in LangChain, LangGraph, LangFlow, and prompt engineering
- Deep cloud experience (AWS: Bedrock, ECS, Lambda, S3, IAM)
- Infra automation using Terraform, CI/CD via GitHub Actions or CodePipeline
- Backend API architecture using FastAPI or Node.js
- Monitoring & observability using Langfuse, LangWatch, OpenTelemetry
- Python, Bash scripting, and low-code/no-code tools (e.g., n8n)

Bonus Skills
- Hands-on with multi-agent orchestration frameworks (CrewAI, AutoGen)
- Experience integrating AI/chatbots into web, mobile, or LMS platforms
- Familiarity with enterprise security, data governance, and compliance frameworks
- Exposure to real-time analytics and event-driven architecture

You'll Be Responsible For
- Defining the AI/ML architecture strategy and roadmap
- Leading design and development of GenAI-powered products and services
- Architecting scalable, modular, and automated AI systems
- Driving experimentation with new models, APIs, and frameworks
- Ensuring robust integration between model, infra, and app layers
- Providing technical guidance and mentorship to engineering teams
- Enabling production-grade performance, monitoring, and governance
Posted 1 week ago
5.0 - 10.0 years
0 - 0 Lacs
Hyderabad, Bengaluru
Hybrid
Role & responsibilities: AWS Lambda, AWS EC2, AWS S3, RESTful APIs, Java, REST API
Posted 1 week ago
6.0 - 11.0 years
20 - 30 Lacs
Bhopal, Hyderabad, Pune
Hybrid
Hello, greetings from NewVision Software!

We are hiring on an immediate basis for the role of Senior / Lead Python Developer + AWS | NewVision Software | Pune, Hyderabad & Bhopal | Full-time. Professionals who can join us immediately or within 15 days are preferred. Please find the job details and description below.

NewVision Software, Pune HQ office: 701 & 702, Pentagon Tower, P1, Magarpatta City, Hadapsar, Pune, Maharashtra - 411028, India
NewVision Software, Hyderabad: The Hive Corporate Capital, Financial District, Nanakaramguda, Telangana - 500032
NewVision Software, Bhopal: IT Plaza, E-8, Bawadiya Kalan Main Rd, near Aura Mall, Gulmohar, Fortune Pride, Shahpura, Bhopal, Madhya Pradesh - 462039

Senior Python and AWS Developer

Role Overview: We are looking for a skilled senior Python developer with a strong background in AWS cloud services to join our team. The ideal candidate will be responsible for designing, developing, and maintaining robust backend systems, ensuring high performance and responsiveness to requests from the front end.

Responsibilities:
- Develop, test, and maintain scalable web applications using Python and Django.
- Design and manage relational databases with PostgreSQL, including schema design and optimization.
- Build RESTful APIs and integrate with third-party services as needed.
- Work with AWS services including EC2, EKS, ECR, S3, Glue, Step Functions, EventBridge Rules, Lambda, SQS, SNS, and RDS.
- Collaborate with front-end developers to deliver seamless end-to-end solutions.
- Write clean, efficient, and well-documented code following best practices.
- Implement security and data protection measures in applications.
- Optimize application performance and troubleshoot issues as they arise.
- Participate in code reviews, testing, and continuous integration processes.
- Stay current with the latest trends and advancements in Python, Django, and database technologies.
- Mentor junior Python developers.

Requirements:
- 6+ years of professional experience in Python development.
- Strong proficiency with the Django web framework.
- Experience working with PostgreSQL, including complex queries and performance tuning.
- Familiarity with RESTful API design and integration.
- Strong understanding of OOP, SOLID principles, and design patterns.
- Strong knowledge of Python multithreading and multiprocessing.
- Experience with AWS services: S3, Glue, Step Functions, EventBridge Rules, Lambda, SQS, SNS, IAM, Secrets Manager, KMS, and RDS.
- Understanding of version control systems (Git).
- Knowledge of security best practices and application deployment.
- Basic understanding of microservices architecture.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.

Nice to Have:
- Experience with Docker, Kubernetes, or other containerization tools.
- Front-end technologies (React).
- Experience with CI/CD pipelines and DevOps practices.
- Experience with infrastructure-as-code tools like Terraform.

Education: Bachelor's degree in Computer Science/Engineering or a related field (or equivalent experience).

Do share your resume with my email address: imran.basha@newvision-software.com

Please share your experience details: Total Experience, Relevant Experience, Exp: Python (Yrs), AWS (Yrs), PostgreSQL (Yrs), REST API (Yrs), Django, Current CTC, Expected CTC (LPA), Notice / Serving (LWD), Any Offer in hand, Current Location, Preferred Location, Education.

Please share your resume and the above details for the hiring process: imran.basha@newvision-software.com
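To illustrate the SQS integration called out above, here is a short boto3 sketch of the producer/consumer pattern a Django backend might use for background jobs; the queue URL and message fields are placeholders.

```python
# Quick sketch of an SQS producer/consumer pattern; the queue URL and fields are placeholders.
import json
import boto3

sqs = boto3.client("sqs", region_name="ap-south-1")
queue_url = "https://sqs.ap-south-1.amazonaws.com/123456789012/report-jobs"

# Producer: enqueue a background job from the web layer
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"report_id": 42, "format": "pdf"}))

# Consumer: a worker polls, processes, then deletes each message
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=5, WaitTimeSeconds=10)
for message in response.get("Messages", []):
    job = json.loads(message["Body"])
    print("processing report", job["report_id"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```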
Posted 1 week ago
8.0 - 12.0 years
30 - 35 Lacs
Gurugram
Work from Office
Role Description
- Write and maintain build/deploy scripts.
- Work with the Sr. Systems Administrator to deploy and implement new cloud infrastructure and designs.
- Manage existing AWS deployments and infrastructure.
- Build scalable, secure, and cost-optimized AWS architecture. Ensure best practices are followed and implemented.
- Assist in deployment and operation of security tools and monitoring. Automate tasks where appropriate to enhance response times to issues and tickets.
- Collaborate with cross-functional teams: work closely with development, operations, and security teams to ensure a cohesive approach to infrastructure and application security. Participate in regular security reviews and planning sessions.
- Incident response and recovery: participate in incident response planning and execution, including post-mortem analysis and implementation of preventive measures.
- Continuous improvement: regularly review and update security practices and procedures to adapt to the evolving threat landscape.
- Analyze and remediate vulnerabilities and advise developers of vulnerabilities requiring updates to code.
- Create and maintain documentation and diagrams for application/security and network configurations.
- Ensure systems are monitored using monitoring tools such as Datadog, and that issues are logged and reported to the required parties.

Technical Skills
- Experience with system administration, provisioning, and managing cloud infrastructure and security monitoring.
- In-depth experience with infrastructure/security monitoring and operation of a product or service.
- Experience with containerization and orchestration such as Docker and Kubernetes/EKS.
- Hands-on experience creating system architectures and leading architecture discussions at a team or multi-team level.
- Understand how to model system infrastructure in the cloud with Amazon Web Services (AWS), AWS CloudFormation, or Terraform.
- Strong knowledge of cloud infrastructure (AWS preferred) services like Lambda, Cognito, SQS, KMS, S3, Step Functions, Glue/Spark, CloudWatch, Secrets Manager, Simple Email Service, and CloudFront.
- Familiarity with coding, scripting, and testing tools (preferred).
- Strong interpersonal, coordination, and multi-tasking skills.
- Ability to function both independently and collaboratively as part of a team to achieve desired results.
- Aptitude to pick up new concepts and technology rapidly; ability to explain it to both business & tech stakeholders.
- Ability to adapt and succeed in a fast-paced, dynamic startup environment.
- Experience with Nessus and other related infosec tooling.

Nice-to-have skills
- Strong interpersonal, coordination, and multi-tasking skills.
- Ability to work independently and follow through to achieve desired results.
- Quick learner, with the ability to work calmly under pressure and with tight deadlines.
- Ability to adapt and succeed in a fast-paced, dynamic startup environment.

Qualifications
- BA/BS degree in Computer Science, Computer Engineering, or a related field; MS degree in Computer Science or Computer Engineering (preferred)
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
maharashtra
On-site
As a talented software developer at our company, you will have the opportunity to showcase your passion for coding and product building. Your problem-solving mindset and love for taking on new challenges will make you a valuable addition to our rockstar engineering team. You will be responsible for designing and developing robust, scalable, and secure backend architectures using Django. Your focus will be on backend development to ensure the smooth functioning of web applications and systems. Creating high-quality RESTful APIs to facilitate seamless communication between the frontend, backend, and other services will also be a key part of your role.

In addition, you will play a crucial role in designing, implementing, and maintaining database schemas to ensure data integrity, performance, and security. You will work on ensuring the scalability and reliability of our backend infrastructure on AWS, aiming for zero downtime of systems. Writing clean, maintainable, and efficient code while following industry standards and best practices will be essential.

Collaboration is key in our team, and you will conduct code reviews, provide feedback to team members, and work closely with frontend developers, product managers, and designers to plan and optimize features. You will break down high-level business problems into smaller components and build efficient systems to address them. Staying updated with the latest technologies, such as LLM frameworks, and implementing them as needed will be part of your continuous learning process. Your skills in Python, Django, SQL/PostgreSQL databases, and AWS services will be put to good use as you optimize systems, identify bottlenecks, and resolve them to enhance efficiency.

To qualify for this role, you should have a Bachelor's degree in Computer Science or a related field, along with at least 2 years of experience in full-stack web development with a focus on backend development. Proficiency in Python and Django and experience with Django Rest Framework are required. Strong problem-solving skills and excellent communication and collaboration abilities are also essential for success in this position.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
You should have strong knowledge of SQL and Python. Experience with Snowflake is preferred. Additionally, you should have knowledge of AWS services such as S3, Lambda, IAM, Step Functions, SNS, SQS, ECS, and DynamoDB. Expertise in data movement technologies like ETL/ELT is important. Good-to-have skills include knowledge of DevOps, Continuous Integration, and Continuous Delivery with tools such as Maven, Jenkins, Stash, Control-M, and Docker. Experience with automation and REST APIs would be beneficial for this role.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As an AI Developer with 5-8 years of experience, you will be based in Pune with a hybrid working model. You should be able to join immediately or within 15 days. Your primary responsibility will be to develop and maintain Python applications, focusing on API building, data processing, and transformation. You will utilize LangGraph to design and manage complex language model workflows and work with machine learning and text processing libraries to deploy agents.

Your must-have skills include proficiency in Python programming with a strong understanding of object-oriented programming concepts. You should have extensive experience with data manipulation libraries like Pandas and NumPy to ensure clean, efficient, and maintainable code. Additionally, you will develop and maintain real-time data pipelines and microservices to ensure seamless data flow and integration across systems. When it comes to SQL, you are expected to have a strong understanding of basic SQL query syntax, including joins, WHERE, and GROUP BY clauses.

Good-to-have skills include practical experience in AI development applications, knowledge of parallel processing and multi-threading/multi-processing to optimize data fetching and execution times, familiarity with SQLAlchemy or similar libraries for data fetching, and experience with AWS cloud services such as EC2, EKS, Lambda, and Postgres. If you are looking to work in a dynamic environment where you can apply your skills in Python, SQL, Pandas, NumPy, agentic AI development, CI/CD pipelines, AWS, and Generative AI, this role might be the perfect fit for you.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
As a Full Stack Developer at Codebase, you will be part of a young software services company that boasts a team of tech-savvy developers. Since our inception in the spring of 2018, we have been on a rapid growth trajectory, catering to software product companies worldwide, with a primary focus on enterprise SaaS, eCommerce, cloud solutions, and application development.

Your role will entail a high level of proficiency in TypeScript, with hands-on experience in creating scalable front-end and back-end applications using contemporary frameworks and cloud services. The ideal candidate for this position excels in a dynamic, startup-like setting and is adept at working both autonomously and collaboratively within a team environment.

Your responsibilities will include developing responsive and high-performance front-end applications using Qwik, TailwindCSS, and DaisyUI. Additionally, you will be tasked with writing infrastructure-as-code and backend services utilizing AWS CDK, AppSync (GraphQL), and Lambda. Your input will be valuable in making design, architecture, and implementation decisions, while also participating in code reviews, writing tests, and enhancing the overall developer experience. Collaboration with stakeholders and offering integration support will also be key aspects of your role.

**Technical Skills Required:**
- At least 3 years of relevant experience with TypeScript for building modern web applications
- Proficiency in React-like frameworks, component-based architecture, and client-side rendering
- Frontend expertise in Qwik (or similar frameworks like Astro, SolidJS), TailwindCSS, and DaisyUI
- Backend proficiency in AWS AppSync (GraphQL), Lambda, and DynamoDB
- Familiarity with infrastructure tools such as AWS CDK (TypeScript) and GitHub Actions

**Desired Skills:**
- Experience in GraphQL schema design
- AWS certifications would be a plus

The expected working hours necessitate a substantial overlap with the time frame between 6 AM and 8 PM MST (5:30 PM to 2 AM IST). If you are passionate about cutting-edge technology, enjoy working in a fast-paced environment, and are ready to contribute to impactful projects, we invite you to join our team at Codebase.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
You will be responsible for mentoring and guiding a team in the role of Technical Lead. In addition, you will support the Technical Architect in designing and exploring new services and modules. Your expertise should include hands-on experience in Java, Spring Boot, and various Spring modules such as Spring MVC, Spring JPA, and Spring Actuators. Furthermore, you must have practical experience with various AWS services including EC2, S3, Lambda, API Gateway, EKS, RDS, Fargate, and CloudFormation. Proficiency in microservices-based architecture and RESTful web services is essential for this role. The ideal candidate will have a notice period of 15 days or less, with preference given to immediate joiners.
Posted 1 week ago
7132 Jobs | Southborough