5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
The client is a global technology consulting and digital solutions company with a vast network of entrepreneurial professionals spread across more than 30 countries. They serve over 700 clients, leveraging their domain and technology expertise to drive competitive differentiation, enhance customer experiences, and improve business outcomes.

As part of an agile team, you will be responsible for developing applications, leading design sprints, and ensuring timely deliveries. Your role will involve designing and implementing low-latency, high-availability, high-performance applications. You will ensure code modularity using a microservices architecture across frontend and backend development, following best practices in backend API development. Throughout the software development lifecycle, you will write code that is maintainable, clear, and concise. Your technical leadership will be crucial in mentoring team members to help them achieve their goals. You will also manage application deployment with a focus on security, scalability, and reliability, and maintain and evolve automated testing setups for backend and frontend applications to enable faster bug reporting and fixing.

A solid understanding of RESTful API design and database design and management, along with experience with version control systems, will be essential for this role. Strong problem-solving and communication skills are required, together with proficiency in object-oriented programming, C# or VB.NET, and writing reusable libraries. Familiarity with design and architectural patterns such as Singleton and Factory, RDBMS such as SQL Server, Postgres, and MySQL, and writing clean, readable, and maintainable code will be beneficial. Experience implementing automated testing platforms and unit tests, and identifying opportunities to optimize code and improve performance, will be valuable assets.
Understanding software engineering best practices is essential for this position. Nice-to-have skills include proficiency in AWS services such as EC2, S3, RDS, EKS, Lambda, CloudWatch, CloudFront, and VPC; experience with Git and DevOps tools such as Jenkins, UCD, Kubernetes, ArgoCD, and Splunk; and skills in .NET with ReactJS.
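The Singleton and Factory patterns named above can be sketched briefly. The posting targets C#/VB.NET, but the same ideas read compactly in Python; the class and function names here are illustrative, not part of any real codebase:

```python
from threading import Lock

class ConfigRegistry:
    """Singleton: one shared instance, created lazily and thread-safely."""
    _instance = None
    _lock = Lock()

    def __new__(cls):
        if cls._instance is None:
            with cls._lock:  # double-checked locking for thread safety
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
                    cls._instance.settings = {}
        return cls._instance

class JsonExporter:
    def export(self, data):
        return f"json:{data}"

class CsvExporter:
    def export(self, data):
        return f"csv:{data}"

def exporter_factory(fmt):
    """Factory: callers ask for a format name, never a concrete class."""
    exporters = {"json": JsonExporter, "csv": CsvExporter}
    try:
        return exporters[fmt]()
    except KeyError:
        raise ValueError(f"unknown format: {fmt}")

a = ConfigRegistry()
b = ConfigRegistry()
print(a is b)                               # True: same instance
print(exporter_factory("csv").export("x"))  # csv:x
```

The factory keeps calling code decoupled from concrete exporter classes, so adding a new format touches only the registry dictionary.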
Posted 3 weeks ago
15.0 - 19.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Cloud Architect - AVP, you will be instrumental in defining and executing our AWS cloud strategy to ensure the effective deployment and administration of AWS cloud solutions. You will lead a team of AWS cloud engineers and architects, collaborate with diverse stakeholders, and use your extensive expertise to promote AWS cloud adoption and innovation throughout the organization. Your primary responsibilities will include:
- Formulating and executing the company's AWS cloud strategy in alignment with business objectives
- Overseeing the design, architecture, and deployment of AWS cloud solutions with a focus on scalability, security, and reliability
- Collaborating with various teams to seamlessly integrate AWS services
- Evaluating and selecting appropriate AWS services and technologies
- Managing the migration of on-premises applications and infrastructure to AWS
- Establishing and enforcing AWS cloud governance, security policies, and best practices
- Providing technical leadership and guidance to the AWS cloud team to promote innovation and continuous enhancement
- Staying abreast of the latest AWS technologies and industry trends to incorporate relevant advancements into the AWS cloud strategy
- Communicating AWS cloud strategy, progress, and challenges effectively to senior leadership and stakeholders

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field, along with a minimum of 15 years of IT experience, of which at least 10 years are dedicated to cloud architecture and implementation, particularly with AWS.
Additionally, you should have:
- Experience with AWS cloud services and compliance frameworks such as SOC 2, ITIL, PCI-DSS, SSAE 16, ISO 27001, COBIT, and/or HITRUST
- Experience with cloud-native architectures and with leading large-scale AWS cloud transformation projects
- Expertise in AWS cloud security, governance, and compliance
- Experience with infrastructure as code (IaC) and automation tools such as AWS CloudFormation and Terraform
- Knowledge of networking, storage, databases, and application development in AWS
- Exceptional problem-solving abilities and innovative design skills for AWS cloud solutions
- Strong leadership and communication capabilities, with a track record of managing and mentoring teams effectively

Preferred qualifications include the AWS Certified Solutions Architect - Professional certification, experience with multi-cloud and hybrid cloud environments, familiarity with DevOps practices and tools like AWS CodePipeline and Jenkins, and knowledge of emerging technologies such as AI, ML, and IoT in relation to AWS cloud computing.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
You will be working as a skilled Senior Software Developer at Noise, a tech-driven connected lifestyle brand that aims to provide the latest gadgets and accessories to young Indian consumers. Founded in 2014 by Amit Khatri and Gaurav Khatri, Noise has been India's No. 1 wearable watch brand for three consecutive years and holds the No. 3 global ranking in the same category. The company was recognized at the prestigious Economic Times Startup Awards 2022 and counts Virat Kohli, Neeraj Chopra, Taapsee Pannu, and others among its brand ambassadors. As a Senior Software Developer at Noise, you will need a minimum of 3 years of hands-on experience in Node.js. Strong knowledge of data structures and algorithms, robust system design capabilities, and familiarity with design patterns are essential for this role. Proficiency in AWS services, Git version control, and project management tools like Jira is a must. Previous experience in mentoring and leading junior developers, especially in a startup environment, will be advantageous. Your responsibilities will include designing, developing, testing, deploying, and maintaining software applications primarily using Node.js. You will be expected to optimize application performance by leveraging your understanding of data structures and algorithms. Architecting scalable and resilient systems while adhering to design patterns to ensure code quality and maintainability will be crucial. Collaboration with cross-functional teams to define, design, and implement new features is a key aspect of this role. Additionally, you will lead, mentor, and coach junior software developers to support their career growth and technical skills. Participation in code reviews, identification of areas for improvement, and advocating for best practices will be part of your responsibilities.
Troubleshooting, root cause analysis, and prompt implementation of solutions are also expected, and staying updated with industry trends, technologies, and best practices is essential to excel in this role. The ideal candidate should hold a Bachelor's degree in Computer Science, Engineering, or a related field, with a minimum of 3 years of professional experience in software development using Node.js. Proficiency in data structures and algorithms, system design, and scalable architecture patterns is essential, as is experience with AWS services and cloud-based architectures, along with proficiency in Git version control and agile project management tools like Jira. A proven track record of leading and mentoring junior team members, together with excellent communication skills, will set you up for success as a "Noisemaker" at Noise.
Posted 3 weeks ago
7.0 - 9.0 years
7 - 17 Lacs
Pune
Remote
Requirements for the candidate:
- Deep knowledge of data engineering techniques to create data pipelines and build data assets
- At least 4 years of strong hands-on programming experience with PySpark, Python, and Boto3, including Python frameworks and libraries, following Python best practices
- Strong experience in code optimization using Spark SQL and PySpark
- Understanding of code versioning, Git repositories, and JFrog Artifactory
- AWS architecture knowledge, especially S3, EC2, Lambda, Redshift, and CloudFormation, and the ability to explain the benefits of each
- Refactoring of legacy codebases: clean, modernize, and improve readability and maintainability
- Unit tests/TDD: write tests before code, ensure functionality, and catch bugs early
- Fixing difficult bugs: debug complex code, isolate issues, and resolve performance, concurrency, or logic flaws
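The tests-before-code workflow listed above can be sketched in a few lines of plain Python. This is an illustrative stand-alone example; the helper and its tests are hypothetical, not part of the role's actual codebase:

```python
import unittest

def dedupe_preserving_order(items):
    """Refactoring target: remove duplicates while keeping first-seen order."""
    seen = set()
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

# In TDD these tests are written first, pinning down the behaviour the
# refactored helper must keep before any implementation work starts.
class DedupeTests(unittest.TestCase):
    def test_removes_duplicates(self):
        self.assertEqual(dedupe_preserving_order([3, 1, 3, 2, 1]), [3, 1, 2])

    def test_empty_input(self):
        self.assertEqual(dedupe_preserving_order([]), [])

# exit=False lets the script continue after the test run (handy in demos).
unittest.main(argv=["tdd"], exit=False, verbosity=0)
```

Writing the failing test first makes the intended behavior explicit, so a later refactor of the legacy implementation can be verified mechanically rather than by eye.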
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
You should have proven experience as a Linux Systems Administrator, with a focus on HPC environments. Your understanding of Linux operating systems such as CentOS, Ubuntu, and Red Hat should be strong, and you should have intermediate knowledge of the SLURM resource scheduler. Hands-on experience with HPC-related AWS services such as EC2, S3, FSx for Lustre, AWS Batch, and AWS ParallelCluster is required. Familiarity with parallel file systems like Lustre and GPFS, and with network storage solutions, is essential. Knowledge of GPU computing and experience with GPU-enabled HPC systems on AWS is a plus, as is experience with configuration management tools such as Ansible, Puppet, and Chef. Experience with cloud-based HPC solutions and hybrid HPC environments will also be beneficial for this role.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
Data Scientist (5+ Years of Experience)

We are seeking a highly motivated Data Scientist with over 5 years of hands-on experience in data mining, statistical analysis, and developing high-quality machine learning models. The ideal candidate will have a passion for solving real-world problems using data-driven approaches and strong technical expertise across various data science domains.

Key Responsibilities:
- Apply advanced data mining techniques and statistical analysis to extract actionable insights
- Design, develop, and deploy robust machine learning models to address complex business challenges
- Conduct A/B and multivariate experiments to evaluate model performance and optimize outcomes
- Monitor, analyze, and enhance the performance of machine learning models post-deployment
- Collaborate cross-functionally to build customer cohorts for CRM campaigns and conduct market basket analysis
- Stay updated with state-of-the-art techniques in NLP, particularly within the e-commerce domain

Required Skills & Qualifications:
- Programming & tools: proficient in Python, PySpark, and SQL for data manipulation and analysis
- Machine learning & AI: strong experience with ML libraries (e.g., Scikit-learn, TensorFlow, PyTorch) and expertise in NLP, computer vision, recommender systems, and optimization techniques
- Cloud & big data: hands-on experience with AWS services, including Glue, EKS, S3, SageMaker, and Redshift
- Model deployment: experience deploying pre-trained models from platforms like Hugging Face and AWS Bedrock
- DevOps & MLOps: understanding of Git, Docker, and CI/CD pipelines, and experience deploying models with frameworks such as FastAPI
- Advanced NLP: experience building, retraining, and optimizing NLP models for diverse use cases

Preferred Qualifications:
- Strong research mindset with a keen interest in exploring new data science methodologies
- Background in e-commerce analytics is a plus

If you're passionate about leveraging data to drive impactful business decisions and thrive in a dynamic environment, we'd love to hear from you!
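As a concrete illustration of the A/B experiments mentioned above, a two-proportion z-test comparing conversion rates between two variants can be written with only the standard library; the traffic numbers below are invented for the example:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (A/B test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Variant B converts 5.2% vs. A's 4.0% on 5,000 visitors each.
z, p = two_proportion_ztest(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at the 5% level if p < 0.05
```

In practice these checks are usually run with SciPy or statsmodels; the stdlib version just makes the arithmetic behind the test explicit.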
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
maharashtra
On-site
As a DevOps Engineer or AWS Cloud Engineer, you will be responsible for setting up AWS infrastructure using Terraform Enterprise and a Concourse CI/CD pipeline. Your role will involve configuring and managing various tools and infrastructure components, with a focus on automation wherever possible. You will also troubleshoot code issues, manage databases and data services such as PostgreSQL, DynamoDB, and Glue, and work with different cloud services. Your responsibilities will include striving for continuous improvement by implementing continuous integration, continuous delivery, and continuous deployment pipelines. Additionally, you will be involved in incident management and root cause analysis of AWS-related issues. To excel in this role, you should have a Master of Science degree in Computer Science, Computer Engineering, or a relevant field. You must have prior work experience as a DevOps Engineer or AWS Cloud Engineer, with a strong understanding of Terraform, Terraform Enterprise, and AWS infrastructure. Proficiency in AWS services, Python, PySpark, and Agile methodology is essential. Experience working with PostgreSQL, DynamoDB, and Glue will be advantageous. If you are passionate about building scalable and reliable infrastructure on AWS, automating processes, and continuously improving systems, this role offers a challenging yet rewarding opportunity to contribute to the success of the organization.
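As a minimal taste of the infrastructure-as-code theme above, an AWS CloudFormation template can be emitted as JSON straight from Python. This is only a sketch: the posting's stack uses Terraform Enterprise, and the DynamoDB table below is a hypothetical resource, not this team's actual infrastructure:

```python
import json

# Declarative resource definition, built as plain data and serialized.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppTable": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "BillingMode": "PAY_PER_REQUEST",
                "AttributeDefinitions": [
                    {"AttributeName": "pk", "AttributeType": "S"}
                ],
                "KeySchema": [
                    {"AttributeName": "pk", "KeyType": "HASH"}
                ],
            },
        }
    },
}

doc = json.dumps(template, indent=2)
print(doc)  # ready for `aws cloudformation deploy --template-file ...`
```

The point of IaC, whichever tool expresses it, is that the desired state lives in reviewable, versionable text like this rather than in console clicks.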
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
jaipur, rajasthan
On-site
Job Description

Kogta Financial Ltd is seeking an experienced and highly skilled ETL & Data Warehouse Developer with expertise in AWS services to join our dynamic team. As a key member of our data engineering team, you will be responsible for designing, developing, and optimizing ETL processes and data warehousing solutions on the AWS platform. The ideal candidate will have a solid background in ETL development and data modeling, a deep understanding of AWS services, and hands-on experience crafting complex SQL queries and optimizing data workflows.

Responsibilities:
- ETL development: design, develop, and implement robust ETL processes using AWS Glue, AWS Data Pipeline, or custom scripts as needed; ensure the efficient extraction, transformation, and loading of data from diverse sources into our data warehouse
- Data warehousing: design and maintain data warehouse solutions on AWS, with a focus on scalability, performance, and reliability; implement and optimize data models for efficient storage and retrieval in AWS Redshift
- AWS service utilization: leverage AWS services such as S3, Lambda, Glue, Redshift, and others to build end-to-end data solutions; stay abreast of AWS developments and recommend the adoption of new services to enhance our data architecture
- SQL expertise: craft complex SQL queries to support data analysis, reporting, and business intelligence requirements; optimize SQL code for performance and efficiency, and troubleshoot any issues related to data retrieval
- Performance optimization: optimize ETL workflows and data warehouse queries to ensure optimal performance; identify and resolve bottlenecks in data processing and storage
- Data integration: collaborate with cross-functional teams to integrate data from various sources into the data warehouse; work closely with business stakeholders to understand data requirements
- Security and compliance: implement and maintain security measures to protect data integrity and ensure compliance with industry standards and regulations; collaborate with the security and compliance teams to implement best practices
- Documentation: document ETL processes, data models, and system configurations for future reference; ensure comprehensive documentation of the developed solutions for knowledge transfer

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Proven experience as an ETL & Data Warehouse Developer with a focus on AWS services and SQL expertise
- Strong proficiency in SQL, stored procedures, and views
- In-depth understanding of AWS services related to data processing and storage
- Experience with data modeling and designing efficient data warehouses
- Familiarity with best practices in data security, compliance, and governance
- Strong problem-solving and analytical skills
- Excellent communication and collaboration skills
- Certification in AWS or relevant technologies

(ref: hirist.tech)
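The SQL side of the role, aggregate queries plus query tuning, can be illustrated end to end with the standard library's sqlite3 module; the table and column names below are invented for the sketch, not Kogta's schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL);
    INSERT INTO sales (region, amount) VALUES
        ('north', 100), ('north', 250), ('south', 80), ('south', 120);
""")

# Aggregate with GROUP BY / HAVING: regions whose total exceeds 300.
rows = con.execute("""
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    HAVING total > 300
""").fetchall()
print(rows)  # [('north', 350.0)]

# Query tuning: an index on the grouping column can spare a temporary sort;
# EXPLAIN QUERY PLAN shows whether the optimizer actually picks it up.
con.execute("CREATE INDEX idx_sales_region ON sales (region)")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT region, SUM(amount) FROM sales GROUP BY region"
).fetchall()
print(plan)
```

Redshift's dialect and tuning levers (distribution and sort keys) differ, but the habit of inspecting the query plan before and after a change carries over directly.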
Posted 3 weeks ago
4.0 - 7.0 years
24 - 40 Lacs
Hyderabad
Work from Office
Design and optimize scalable data pipelines using Python, Scala, and SQL. Work with AWS services, Redshift, Terraform, Docker, and Jenkins. Implement CI/CD, manage infrastructure as code, and ensure efficient data flow across systems.
Posted 3 weeks ago
4.0 - 6.0 years
10 - 20 Lacs
Chennai
Work from Office
Role: DevOps Engineer

Job Description: DevOps Engineer (4+ Years of Experience)

We are looking for a DevOps Engineer with 4+ years of experience to join our dynamic team. The ideal candidate will have hands-on experience with AWS services, Docker, Kubernetes, and Jenkins, along with a strong understanding of CI/CD pipelines and infrastructure automation. Relevant course completion is mandatory, and certifications in related fields are a plus.

Key Responsibilities:
- Design, implement, and manage scalable and reliable cloud infrastructure using AWS services
- Develop and maintain CI/CD pipelines using Jenkins to support continuous integration and deployment
- Containerize applications using Docker and orchestrate them with Kubernetes
- Monitor, troubleshoot, and optimize system performance to ensure high availability and scalability
- Collaborate with development and operations teams to improve deployment workflows and infrastructure automation
- Implement security best practices for cloud and container environments
- Maintain and update documentation for infrastructure, processes, and configurations

Requirements:
- Experience: 2+ years in DevOps or related roles
- Hands-on experience with AWS services (e.g., EC2, S3, RDS, CloudFormation, Lambda)
- Strong understanding and practical knowledge of Docker and containerization
- Experience with Kubernetes for container orchestration
- Proficiency in using Jenkins for CI/CD pipeline creation and management
- Familiarity with Infrastructure as Code (IaC) tools like Terraform or CloudFormation
- Basic scripting knowledge (e.g., Bash, Python, or PowerShell)
- Familiarity with version control systems like Git
- Strong problem-solving skills and attention to detail
- Excellent communication and collaboration abilities

Preferred Qualifications:
- Relevant certifications in AWS or Kubernetes
- Understanding of monitoring tools like Prometheus, Grafana, or CloudWatch
- Experience in setting up logging systems (e.g., ELK Stack)

Interested candidates can share their CV at madhumithak@sightspectrum.in
Posted 3 weeks ago
2.0 - 5.0 years
6 - 11 Lacs
Hyderabad
Work from Office
Roles & Responsibilities:
- Take ownership of architecture design and development of scalable and distributed software systems
- Translate business requirements into technical requirements
- Oversee technical execution, ensuring code quality, adherence to deadlines, and efficient resource allocation
- Apply data-driven decision-making skills with a focus on achieving product goals
- Design and develop data ingestion and processing pipelines capable of handling large-scale events
- Own the complete software development lifecycle, including requirements analysis, design, coding, testing, and deployment
- Utilize AWS or Azure services such as IAM, monitoring, load balancing, autoscaling, database, networking, storage, ECR, AKS, and ACR
- Implement DevOps practices using tools like Docker and Kubernetes to ensure continuous integration and delivery
- Develop DevOps scripts for automation and monitoring
- Collaborate with cross-functional teams, conduct code reviews, and provide guidance on software design and best practices

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience)
- 2-5 years of experience in software development, with relevant work experience
- Strong coding skills with proficiency in Python, Java, or C++
- Experience with API frameworks, both stateless and stateful, such as FastAPI, Django, Spring, and Spring Boot
- Proficiency in cloud platforms, specifically AWS, Azure, or GCP
- Hands-on experience with DevOps tools including Docker, Kubernetes, and AWS services
- Strong understanding of scalable application design principles, security best practices, and compliance with privacy regulations
- Good knowledge of software engineering practices such as version control (Git), DevOps (Azure DevOps preferred), and Agile or Scrum
- Strong communication skills, with the ability to effectively convey complex technical concepts to a diverse audience
- Experience with the SDLC and development best practices
- Preferred: experience with AI/ML-based product development
- Experience with Agile methodology for continuous product development and delivery

Why you might want to join us:
- Be part of shaping one of the most exciting AI companies
- Learn from a peer group of experts spanning AI, computer vision, and robotics to data engineering and systems engineering
- Sharp, motivated co-workers in a fun office environment

Our motto:
- Put employees first. We only succeed when our employees succeed.
- Think big. Be ambitious and have audacious goals.
- Aim for excellence. Quality and excellence count in everything we do.
- Own it and get it done. Results matter!
- Embrace each other's differences.
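The event-ingestion-and-processing responsibility above can be sketched with plain Python generators, which keep memory flat however large the stream grows because each stage consumes the previous one lazily; the event format here is invented for the example:

```python
import json

def parse(lines):
    """Stage 1: decode raw JSON lines, skipping malformed events."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # a bad record should not fail the whole batch

def only(events, kind):
    """Stage 2: filter to one event type."""
    return (e for e in events if e.get("type") == kind)

def amounts(events):
    """Stage 3: project out the numeric field we aggregate over."""
    return (e["amount"] for e in events)

raw = [
    '{"type": "purchase", "amount": 30}',
    'not json',
    '{"type": "view"}',
    '{"type": "purchase", "amount": 12}',
]
total = sum(amounts(only(parse(raw), "purchase")))
print(total)  # 42
```

In production the same shape maps onto a streaming framework (Kafka consumers, Spark, or Kinesis); the generator version just shows the staged, backpressure-friendly structure in miniature.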
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
noida, uttar pradesh
On-site
As a highly experienced and motivated Backend Solution Architect, you will be responsible for leading the design and implementation of robust, scalable, and secure backend systems. Your expertise in Node.js and exposure to Python will be crucial in architecting end-to-end backend solutions using microservices and serverless frameworks. You will play a key role in ensuring scalability, maintainability, and security, while also driving innovation through the integration of emerging technologies like AI/ML. Your primary responsibilities will include designing and optimizing backend architecture, managing AWS-based cloud solutions, integrating AI/ML components, containerizing applications, setting up CI/CD pipelines, designing and optimizing databases, implementing security best practices, developing APIs, monitoring system performance, and providing technical leadership in collaboration with cross-functional teams. To be successful in this role, you should have at least 8 years of backend development experience, with a minimum of 4 years as a Solution/Technical Architect. Your expertise in Node.js, AWS services, microservices, event-driven architectures, Docker, Kubernetes, CI/CD pipelines, authentication/authorization mechanisms, and API development will be critical. Additionally, hands-on experience with AI/ML workflows, React, Next.js, and Angular, as well as an AWS Solution Architect Certification, will be advantageous. At TechAhead, a global digital transformation company, you will have the opportunity to work on cutting-edge AI-first product design thinking and bespoke development solutions. By joining our team, you will contribute to shaping the future of digital innovation worldwide and driving impactful results with advanced AI tools and strategies.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
As an Engineering Leader at Crop.photo, you will play a crucial role in shaping the future of brand consistency through AI technology. With a focus on building a high-performing engineering team, you will not only lead by example through hands-on coding but also provide guidance to ensure the success of our projects. Your responsibilities will encompass a wide range of tasks, from architecting and developing our AWS-based microservices infrastructure to collaborating with product management on technical decision-making. You will be at the forefront of backend development using Java, Node.js, and Python within the AWS ecosystem, while also contributing to frontend development using React and TypeScript when necessary. Your expertise will be essential in designing and implementing scalable AI/ML pipeline architectures, establishing engineering best practices, and mentoring junior engineers to foster a culture of engineering excellence. Additionally, you will be responsible for system reliability, performance optimization, and cost management, ensuring that our platform delivers high-quality solutions for our marketing professionals. To excel in this role, you must have a minimum of 8+ years of software engineering experience, including at least 3 years of experience leading engineering teams. Your technical skills should cover a wide range of AWS services, backend development, frontend development, system design & architecture, as well as leadership & communication. Your ability to drive architectural decisions, identify technical debt, and lead initiatives to address it will be key to the success of our projects. Working at Crop.photo will provide you with the opportunity to take true technical ownership of a rapidly growing AI platform, shape architecture from an early stage, work with cutting-edge AI/ML technologies, and have a direct impact on product direction and engineering culture. 
Your success in this role will be measured by the implementation of scalable, maintainable architecture, reductions in system latency and processing costs, successful delivery of key technical initiatives, team growth and engineering velocity improvements, and system reliability and uptime metrics. If you are passionate about building scalable systems, have a proven track record of technical leadership, and thrive in an early-stage environment where you can make a significant impact on both technology and team culture, we encourage you to apply for this exciting opportunity at Crop.photo.
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
chennai, tamil nadu
On-site
This is a full-time on-site role for a PHP Laravel Developer based in Chennai. In this position, you will play a key role in developing and maintaining web applications utilizing the Laravel framework. Your responsibilities will include coding, debugging, testing, and deploying new features. Additionally, you will collaborate with cross-functional teams to create efficient and scalable solutions. To excel in this role, you must possess a strong proficiency in PHP and have hands-on experience with the Laravel framework. Familiarity with frontend technologies like HTML, CSS, and JavaScript is essential. Moreover, knowledge of database management systems, particularly MySQL, is required. Understanding RESTful APIs, integrating third-party services, and using version control systems like Git are also important aspects of this position. Candidates should have practical experience in schema design, query optimization, REST API, and AWS services such as EC2, S3, RDS, Lambda, and Redis. Proficiency in designing scalable and secure web applications, expertise in automated testing frameworks, and a solid grasp of web security practices are crucial for success in this role. The ideal candidate will be able to prioritize tasks effectively and work both independently and collaboratively as part of a team. Strong problem-solving and troubleshooting skills are essential, as is clear communication and the ability to work with others. A Bachelor's degree in computer science or a related field, or equivalent experience, is required. 
Requirements:
- Strong proficiency in PHP with the Laravel framework
- Experience in HTML, CSS, and JavaScript
- Knowledge of MySQL and RESTful APIs
- Familiarity with Git and version control systems
- Hands-on experience with schema design, query optimization, and REST APIs
- Profound knowledge of AWS services
- Demonstrated experience in designing scalable and secure web applications
- Expertise in automated testing frameworks
- Strong understanding of web security practices
- Ability to prioritize tasks and work independently or as part of a team
- Excellent problem-solving and troubleshooting skills
- Good communication and collaboration skills
- Bachelor's degree or equivalent experience in computer science or a related field

Experience: 4+ Years
Location: Chennai/Madurai

Interested candidates can share their CV at anushya.a@extendotech.com / 6374472538

Job Type: Full-time
Benefits: Health insurance, Provident Fund
Location Type: In-person
Schedule: Morning shift
Work Location: In person
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a Lead Backend Developer at Runo, you will be part of a product-based, funded company operating from Cyber Towers in HiTech City, Hyderabad, India. Runo specializes in Call Management CRM solutions for sales teams globally, aiming to scale the engineering team to support user acquisition growth in the global market while maintaining success in the Indian market. Your role will involve scaling the existing architecture, which currently handles 100 million requests per month, to handle billions of requests monthly. You will contribute to architecting the database/reporting layer to support complex queries and data exports of millions of records, and as a lead developer you will drive the development of new software products and enhancements to existing products. The ideal candidate has proficiency in NodeJS, expert knowledge of NoSQL databases (preferably MongoDB), experience handling products with over a million requests per month, and 5-8 years of relevant professional work experience. A self-motivated individual who can work independently and is committed to delivering high-quality results would be an excellent fit for this role. Your responsibilities will include contributing to the product architecture to handle millions of requests daily, optimizing the architecture for regional data zones, developing high-performance, reusable, and bug-free APIs, and optimizing existing APIs for performance through data denormalization and by decoupling long-running APIs using queues. Familiarity with AWS services like API Gateway, Lambda, SNS, SQS, and SES will be beneficial. At Runo, we offer a competitive salary, ESOPs, and medical insurance as part of our comprehensive benefits package.
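Decoupling a long-running task from the request path with a queue, as described above, can be sketched with the standard library; `queue.Queue` plus a worker thread stands in for SQS plus a consumer, and the handler and job names are illustrative, not Runo's actual API:

```python
import queue
import threading
import time

jobs = queue.Queue()
results = {}

def worker():
    """Background consumer: drains the queue and records results."""
    while True:
        job_id, payload = jobs.get()
        time.sleep(0.01)  # the slow part, e.g. a large data export
        results[job_id] = f"done:{payload}"
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_request(job_id, payload):
    """API handler: enqueue and return immediately instead of blocking."""
    jobs.put((job_id, payload))
    return {"status": "accepted", "job_id": job_id}

resp = handle_request("r1", "contacts.csv")
print(resp)            # {'status': 'accepted', 'job_id': 'r1'}
jobs.join()            # the demo waits here; a real API would not
print(results["r1"])   # done:contacts.csv
```

The client gets an immediate "accepted" response and polls (or is notified) for completion, which keeps API latency flat no matter how slow the export is.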
Posted 3 weeks ago
8.0 - 12.0 years
0 - 0 Lacs
pune, maharashtra
On-site
At BMC, trust is not just a word - it's a way of life! We are an award-winning, equal opportunity, culturally diverse, and fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities as we believe that you bring your best every day. We champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! In our IS&T (Information Services and Technology) department, we provide all the necessary technology and operational support services to run our business here at BMC. We have over 200 servers on premises supporting production, disaster recovery, databases, applications, and over 1000 servers in the Lab environment. IS&T plays a transformational role not only for BMC but also for enhancing the customer experience. Our cutting-edge technologies manage BMC's infrastructure and showcase it to the customers through the program called BMC on BMC. We are looking for an experienced Lead Quality Engineer to establish and drive the quality strategy for our Generative AI practice. In this role, you will ensure quality across our AI-powered solutions and custom-built enterprise applications, focusing on end-to-end testing of integrated systems. 
Your contributions to BMC's and your own success will be significant in multiple ways:

Strategic Leadership:
- Lead quality strategy for AI and AI-driven custom applications
- Establish testing standards and frameworks
- Drive continuous improvement in testing practices
- Build and mentor a quality engineering team
- Collaborate with cross-functional teams

Technical Quality Management:
- Design comprehensive test strategies for full-stack applications
- Establish quality metrics and acceptance criteria
- Lead implementation of automated testing solutions
- Ensure security and performance testing coverage
- Drive quality gates and metrics implementation

Key Competencies:
- Technical leadership
- Testing strategy development
- Framework architecture
- Quality metrics definition
- Team mentoring
- Process improvement

This role requires a balance of technical expertise and strategic leadership, ensuring quality across AI implementations and custom applications while building a strong quality engineering practice.
**Core Technical Expertise (Must Have):**

Modern Testing Frameworks & Tools:
- Python testing (pytest, unittest)
- JavaScript/TypeScript testing
- API automation (Postman, RestAssured)
- UI testing (Selenium, Cypress)
- Performance testing (JMeter, K6)
- CI/CD integration (Jenkins, GitHub, Bitbucket)

Cloud & Infrastructure Testing:
- AWS services (EKS, RDS, Bedrock)
- Kubernetes deployments
- Microservices architecture
- Database testing (PostgreSQL)
- Authentication systems (Okta)

AI/ML Testing Expertise:
- LLM integration testing
- Prompt engineering validation
- Model output verification
- Vector database testing
- Custom AI workflow validation

**Domain Knowledge:**
- Enterprise application testing
- Business process validation
- Integration testing patterns
- Security testing principles
- Performance optimization
- Data integrity validation

**Required Experience:**
- 8+ years in quality engineering
- 3+ years in leadership roles
- Strong background in enterprise applications
- Experience with AI/ML systems
- Cloud-native application testing
- Excellent communication skills

At BMC, our culture revolves around our people. With 6000+ brilliant minds working together globally, you won't just be known by your employee number but for your true authentic self. BMC allows you to be YOU! If you're unsure whether you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply. We aim to attract talent from diverse backgrounds and experiences to ensure we face the world together with the best ideas. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country-specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices.
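The "model output verification" skill listed above usually reduces to asserting on response structure rather than exact text, since LLM output is non-deterministic. A minimal, hypothetical sketch (the expected keys are illustrative assumptions, not from any specific product):

```python
import json

def validate_model_output(raw, required_keys=("answer", "sources")):
    """Structural check for an LLM response.

    Automated tests assert on structure and constraints rather than
    exact strings. The required keys here are illustrative only.
    """
    try:
        payload = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return False
    return isinstance(payload, dict) and all(k in payload for k in required_keys)

if __name__ == "__main__":
    print(validate_model_output('{"answer": "42", "sources": ["doc1"]}'))  # True
    print(validate_model_output("free-form text, not JSON"))               # False
```

In a real test suite, checks like this sit alongside constraint assertions (length limits, citation presence, refusal detection) rather than golden-string comparisons.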
Posted 3 weeks ago
5.0 - 10.0 years
30 - 32 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
The Data Engineer is responsible for designing, building, and maintaining scalable data pipelines, data lakes, and data warehouses, and collaborates closely with cross-functional teams to understand data requirements.
Posted 3 weeks ago
3.0 - 6.0 years
13 - 23 Lacs
Bengaluru
Work from Office
Golang with Vuejs
DATABASE: MongoDB, MySQL, PostgreSQL, MariaDB
MESSAGE TOOLS: Apache Kafka, RabbitMQ
CLOUD TECHNOLOGIES: AWS (EC2, S3, RDS, Lambda, API Gateway, SQS & SNS)
CONTAINER PLATFORMS: Docker, EC2
VERSION CONTROL TOOLS: Git, Bitbucket
CI TOOLS: Jenkins, AWS CodePipeline & CodeBuild
Posted 3 weeks ago
1.0 - 2.0 years
3 - 7 Lacs
Mohali
Work from Office
Manage AWS (EC2, S3, RDS), CI/CD pipelines, Bunny.net CDN. Ensure performance, uptime, and cost optimization. Set up alerts, logs, and backups for reliability. Analyze AWS usage for efficient cloud cost control.
Posted 3 weeks ago
6.0 - 10.0 years
22 - 25 Lacs
Bengaluru
Work from Office
Our engineering team is looking for a Data Engineer who is highly proficient in Python, has a strong understanding of AWS cloud computing and ETL pipelines, and demonstrates proficiency with SQL and relational database concepts. In this role you will be a mid- to senior-level individual contributor guiding our migration efforts, serving as a senior data engineer working closely with the data architects to evaluate best-fit solutions and processes for our team. You will work with the rest of the team as we move away from legacy tech and introduce new tools and ETL pipeline solutions. You will collaborate with subject matter experts, data architects, informaticists, and data scientists to evolve our current cloud-based ETL to the next generation.

Responsibilities:
- Independently prototypes/develops data solutions of high complexity to meet the needs of the organization and business customers.
- Designs proof-of-concept solutions utilizing an advanced understanding of multiple coding languages to meet technical and business requirements, with an ability to perform iterative solution testing to ensure specifications are met.
- Designs and develops data solutions that enable effective self-service data consumption, and can describe their value to the customer.
- Collaborates with stakeholders in defining metrics that are impactful to the business; prioritizes efforts based on customer value.
- Has an in-depth understanding of Agile techniques; can set expectations for deliverables of high complexity.
- Can assist in the creation of roadmaps for data solutions; can turn vague ideas or problems into data product solutions.
- Influences strategic thinking across the team and the broader organization.
- Maintains proof-of-concept and prototype data solutions, and manages any assessment of their viability and scalability, with own team or in partnership with IT.
- Working with IT, assists in building robust systems focusing on long-term and ongoing maintenance and support.
- Ensures data solutions include the deliverables required to achieve high-quality data.
- Displays a strong understanding of complex multi-tier, multi-platform systems, and applies principles of metadata, lineage, business definitions, compliance, and data security to project work.
- Has an in-depth understanding of Business Intelligence tools, including visualization and user experience techniques.
- Works with IT to help scale prototypes.
- Demonstrates a comprehensive understanding of new technologies as needed to progress initiatives.

Requirements:
- Expertise in Python programming, with demonstrated real-world experience building out data tools in a Python environment.
- Expertise in AWS services, with demonstrated real-world experience building out data tools on AWS.
- Bachelor's degree in Computer Science, Computer Engineering, or a related discipline preferred; Master's in the same or a related discipline strongly preferred.
- 3+ years of experience in coding for data management, data warehousing, or other data environments, including, but not limited to, Python and Spark. Experience with SAS is preferred.
- 3+ years of experience as a developer working in an AWS cloud computing environment.
- 3+ years of experience using Git or Bitbucket.
- Experience with Redshift, RDS, and DynamoDB is preferred.
- Strong written and oral communication skills.
- Experience in the healthcare industry with healthcare data analytics products.
- Experience with healthcare vocabulary and data standards (OMOP, FHIR) is a plus.
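The pipeline prototyping described in listings like the one above can be sketched as a minimal extract-transform-load flow using only the standard library. The record fields, values, and aggregation rule below are illustrative assumptions, not taken from any posting:

```python
# Minimal ETL sketch: extract -> transform -> load.
# All field names and the state-level aggregation are hypothetical.
from collections import defaultdict

def extract():
    # Stand-in for reading from a source system (e.g. an S3 object or RDS table).
    return [
        {"patient_id": "p1", "charge": "120.50", "state": "TX"},
        {"patient_id": "p2", "charge": "bad", "state": "CA"},
        {"patient_id": "p3", "charge": "75.00", "state": "TX"},
    ]

def transform(rows):
    # Coerce types and drop rows whose charge does not parse.
    clean = []
    for row in rows:
        try:
            clean.append({**row, "charge": float(row["charge"])})
        except ValueError:
            continue  # a real pipeline would quarantine this row, not drop it silently
    return clean

def load(rows):
    # Aggregate charges per state, standing in for a warehouse write.
    totals = defaultdict(float)
    for row in rows:
        totals[row["state"]] += row["charge"]
    return dict(totals)

if __name__ == "__main__":
    print(load(transform(extract())))  # {'TX': 195.5}
```

Production versions of each stage would be backed by an orchestrator and a warehouse, but the shape of the work (parse, validate, aggregate) is the same.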
Posted 3 weeks ago
7.0 - 10.0 years
15 - 19 Lacs
Bengaluru
Hybrid
[Job Title]: Sterling Integrator Specialist
[Project Details]: Integration platform hosted on AWS, integrating with internal systems (including ERP/Finance/Planning/Commerce) and 150+ trading partners (including retailers, 3PLs, banks, suppliers) across ANZ, EU, and US.
[Technology and Sub-technology]: IBM Sterling B2B Integrator, ITX, ITXA, SSP, ICC, LW, AWS Services
[Base Location]: Bangalore
[Type]: Hybrid
[Qualifications]: 6+ years of experience with the Sterling suite of integration products
[Job Overview]: You will work closely with development teams, operations teams, and leadership inside and outside the organization to design and build interfaces that empower the enterprise to integrate its SCM ecosystem. We look for problem solvers who can intuitively anticipate problems, look beyond immediate issues, and take the initiative to tackle the most interesting and diverse technical challenges. We are open to adopting the latest tools and techniques for solving complex problems.

Primary Skills:
- Excellent communication, logical, and analytical skills, along with stakeholder management
- Willingness to support in shifts
- Sound knowledge of EAI/B2B architecture and design methodologies (must)
- Minimum of 6 years' experience with IBM Sterling Integrator
- Extensive knowledge of IBM Sterling Secure Proxy and IBM Control Center
- Extensive IBM Sterling Integrator map development and business process development skills, with hands-on SI admin tasks (must)
- Extensive knowledge of EDI standards (EDIFACT, X12, TRADACOMS, ...), XML, IDoc, and other flat file formats
- Good understanding of JDBC, SAP integration, API, and SOAP (must)
- Good experience with system operations and performance tuning
- Server management, installation, and patching knowledge/experience for the Sterling suite of products
- Extensive knowledge of SQL queries
- Good knowledge of PGP/encryption concepts
- Provide technical support for all B2B transactions
- Understanding of trading partner setups that require communication protocol setup such as AS2, VAN, SFTP, HTTP/s, CD, MQ
- Strong UNIX scripting skills (must)
- Fair knowledge of Sterling File Gateway, ITX/A, MQ, and the Lightwell framework

Role:
- Technical ownership of the integration platform
- Conduct research on new technologies; lead/contribute to the effort of solving complex technical challenges to improve engineering productivity
- Drive/contribute to standard methodology for improving code quality, performance, and security compliance
- Platform architecture & administration: best practices & guidelines, patch & fix management, capacity planning, performance tuning, continuous improvement & automation, transport management & version control
- Implementation: trading partner onboarding (EDI/non-EDI), new integration design & development, change request implementation, enhancements, migrations
- BAU: ensuring business continuity; monitoring integration components, DB, IO, CPU & memory; troubleshooting, root cause analysis, reprocessing, and reconciliations

[Good to have Skills]:
- Exposure to AWS and other integration applications
- Infrastructure nomenclature, networking, databases, Docker, and containers
- Effective communication, presentation, organizational, and planning skills
- Willingness to take on new challenges and environments on a frequent basis
- Willingness to upskill on other integration tools such as MuleSoft and Informatica on Cloud (IOC)
- Self-driven, willing to work individually as well as act as tech lead as requirements demand
- Coding skills (Java, Python, Shell)
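The EDI map development these requirements describe starts with tokenizing an interchange into segments and elements. A minimal, illustrative X12 parser follows; the fixed separators and the fabricated sample interchange are assumptions (in practice the terminator and separator are read from the ISA envelope):

```python
def parse_x12_segments(raw, seg_term="~", elem_sep="*"):
    # Split a raw X12 interchange into segments, then each segment
    # into its elements. Hard-coded defaults are for illustration only;
    # real interchanges declare their delimiters in the ISA segment.
    segments = []
    for seg in raw.strip().split(seg_term):
        seg = seg.strip()
        if seg:
            segments.append(seg.split(elem_sep))
    return segments

if __name__ == "__main__":
    sample = "ISA*00*SENDER~GS*PO*ACME~ST*850*0001~"  # fabricated fragment
    for segment in parse_x12_segments(sample):
        print(segment[0], segment[1:])
```

Tools like ITX maps operate on exactly this segment/element structure, mapping it to IDoc, XML, or flat-file targets.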
Posted 4 weeks ago
2.0 - 3.0 years
7 - 9 Lacs
Bengaluru
Work from Office
Job Description: Are you a data enthusiast with a passion for transforming raw data into actionable insights? We are seeking a talented Data Scientist to join our dynamic team. As a Data Scientist at Intellimind, you will play a pivotal role in leveraging AI, ML, and big data analytics to drive data-driven decision-making and generate tangible business value.

Key Responsibilities:
- Manage, architect, and analyze large datasets to create data-driven insights and high-impact models that drive business growth.
- Develop predictive statistical models and utilize mathematical tools and techniques to solve complex business problems.
- Translate business challenges into specific requirements, identify relevant data sources, and deliver analytic solutions.
- Implement new predictive and prescriptive solutions aligned with business needs and requirements.
- Lead the development and delivery of large-scale programs that integrate processes with technology to enhance performance.
- Take ownership of end-to-end big data solutions, including data acquisition, storage, transformation, and analysis.
- Manage specific project areas or entire projects, working as both an individual contributor and an overseer of small work efforts.
- Work autonomously with minimal supervision, demonstrating strong problem-solving skills.

Qualifications & Skills:
- Bachelor's or master's degree in a relevant field from a well-known top university.
- 2-3 years of experience in data science and analytics, including AI and ML projects.
- Proficiency in statistical techniques and machine learning algorithms.
- Hands-on experience with analysis tools such as R and Python.
- Understanding of text analysis and Natural Language Processing (NLP).
- Knowledge of the AWS platform.
- Proficiency in VBA and visualization tools like Power BI and QuickSight.
- Solid understanding of statistical modeling, machine learning algorithms, and data mining concepts.
- Advanced proficiency in Excel and PowerPoint.
- Exceptional written and oral communication skills.
- Strong interpersonal skills and the ability to collaborate across diverse teams and cultures.
- Several completed data science projects.

If you are a data-driven problem solver with a passion for transforming data into meaningful insights, we invite you to join our team and contribute to our mission of delivering high-impact data solutions that provide a competitive advantage for our stakeholders.
Posted 4 weeks ago
5.0 - 10.0 years
20 - 30 Lacs
Bengaluru
Work from Office
Greetings from BCforward INDIA TECHNOLOGIES PRIVATE LIMITED.

FTE | Location: Hyderabad | Work Mode: WFO

JD: DevOps Engineering (Experience: 5 to 10 years)
- Provision and secure cloud infrastructure using Terraform / AWS CloudFormation
- Fully automated GitLab CI/CD pipelines for application builds, tests, and deployment, integrated with Docker containers and AWS ECS/EKS
- Continuous integration workflows with automated security checks, testing, and performance validation
- A self-service developer portal providing access to system health, deployment status, logs, and documentation for a seamless developer experience
- AWS CloudWatch dashboards and alarms for real-time monitoring of system health, performance, and availability
- Centralized logging via CloudWatch Logs for application performance and troubleshooting
- Complete documentation for all automated systems, infrastructure code, CI/CD pipelines, and monitoring setups
- Monitoring with Splunk: ability to create dashboards and alerts, integrating with tools like MS Teams

Requirements:
- Master's or bachelor's degree in computer science/IT or equivalent
- Expertise in shell scripting
- Familiarity with operating systems: Windows & Linux
- Experience with Git version control
- Ansible (good to have)
- Familiarity with CI/CD pipelines (GitLab)
- Docker, Kubernetes, OpenShift; strong in Kubernetes administration
- Experience with infrastructure as code: Terraform & AWS CloudFormation
- Familiarity with AWS services such as EC2, Lambda, Fargate, VPC, S3, ECS, EKS
- Nice to have: familiarity with observability and monitoring tools such as OpenTelemetry, Grafana, the ELK stack, and Prometheus

Please share your updated resume, PAN card soft copy, passport-size photo, and UAN history. Interested applicants can share an updated resume to g.sreekanth@bcforward.com. Note: looking for immediate joiners. All the best!
Posted 4 weeks ago
3.0 - 8.0 years
35 - 60 Lacs
Hyderabad
Remote
Employment Details:
- Type: Full-time (outcome-focused, GSD mindset)
- Shift: US shift (7:00 PM to 6:00 AM IST)
- Location: Remote (India)
- Compensation: INR 36-60 LPA
- Start Date: Immediate

About Us: We are a two-time IPO founding team building the next big thing in CRM: a highly intelligent, AI-driven contact manager and productivity tool designed for sales and relationship professionals. Our product is near completion, and we're now seeking top-tier senior full-stack engineers to bring it across the finish line and make it live.

Role Overview: As a Senior Full Stack Engineer, you will own critical product features end-to-end, work closely with our founding team, and ensure a seamless launch. You are a product ownership fanatic with a bias for action: someone who can get shit done. You'll architect, build, and maintain scalable systems on AWS, design intuitive front-end experiences in React, and implement robust back-end services in Python and JavaScript. You will also leverage GPT and other AI models to power our prompt-engineering features.

Key Responsibilities:
- Product Ownership & Delivery: Take the product from near-completion to production launch. Responsible for reliability, performance, and security.
- Front-end Development: Build and optimize interactive UIs in React. Collaborate on design, ensure responsiveness, and implement best practices.
- Back-end Development: Design and develop RESTful APIs and services in Python (Django/Flask or equivalent) and JavaScript (Node.js). Ensure clean architecture and maintainable code.
- AWS & DevOps: Architect, deploy, and monitor infrastructure on AWS (EC2, ECS/EKS, Lambda, RDS, S3, CloudWatch). Implement CI/CD pipelines for continuous delivery.
- AI Prompt Engineering: Integrate GPT/Perp models for advanced AI-driven features, fine-tune prompts, and optimize model performance.
- Collaboration & Mentorship: Work closely with founders, product managers, and QA teams to define technical requirements. Mentor junior engineers and drive code quality.

Must-Have Qualifications:
- Product ownership fanatic with a bias for action; proven ability to execute and deliver under tight deadlines.
- 3-10 years of professional experience in full-stack development.
- Proven track record delivering production-grade SaaS products, ideally in CRM, productivity, or sales domains.
- Strong proficiency in React and modern JavaScript/TypeScript.
- Deep experience with Python frameworks (Django, Flask, FastAPI) and JavaScript back-ends (Node.js, Express).
- Hands-on expertise with AWS services and DevOps best practices (IaC, Docker, Kubernetes, CI/CD).
- Experience integrating and fine-tuning GPT-like models or similar AI/NLP solutions.
- Excellent problem-solving skills and attention to detail.
- Strong verbal and written English communication.

Nice-to-Have:
- Prior experience in CRM, contact management, or sales productivity tools.
- Familiarity with GraphQL, Redis, and Elasticsearch.
- Experience with data analytics or machine learning pipelines.

Screening Process:
- Application Form: 5 minutes
- Technical Assessment: 90 minutes
- English Proficiency Test: 90 minutes
- Technical Interview: 15 minutes
- Founder/Client Interview: 30 minutes

Why Join Us?
- Work directly with a 2x IPO founding team on a ground-breaking CRM platform.
- High ownership and autonomy from day one.
- Fast-paced environment with clear impact on product success.
- Opportunity to shape the future of sales productivity with AI.
Posted 1 month ago
7.0 - 12.0 years
25 - 40 Lacs
Gurugram
Remote
Job Title: Senior Data Engineer
Location: Remote
Job Type: Full-time
YoE: 7 to 10 years of relevant experience
Shift: 6.30pm to 2.30am IST

Job Purpose: The Senior Data Engineer designs, builds, and maintains scalable data pipelines and architectures to support the Denials AI workflow under the guidance of the Team Lead, Data Management. This role ensures data is reliable, compliant with HIPAA, and optimized.

Duties & Responsibilities:
- Collaborate with the Team Lead and cross-functional teams to gather and refine data requirements for Denials AI solutions.
- Design, implement, and optimize ETL/ELT pipelines using Python, Dagster, DBT, and AWS data services (Athena, Glue, SQS).
- Develop and maintain data models in PostgreSQL; write efficient SQL for querying and performance tuning.
- Monitor pipeline health and performance; troubleshoot data incidents and implement preventive measures.
- Enforce data quality and governance standards, including HIPAA compliance for PHI handling.
- Conduct code reviews, share best practices, and mentor junior data engineers.
- Automate deployment and monitoring tasks using infrastructure-as-code and AWS CloudWatch metrics and alarms.
- Document data workflows, schemas, and operational runbooks to support team knowledge transfer.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 5+ years of hands-on experience building and operating production-grade data pipelines.
- Solid experience with workflow orchestration tools (Dagster) and transformation frameworks (DBT), or other similar tools (Microsoft SSIS, AWS Glue, Airflow).
- Strong SQL skills on PostgreSQL for data modeling and query optimization, or other similar technologies (Microsoft SQL Server, Oracle, AWS RDS).
- Working knowledge of AWS data services: Athena, Glue, SQS, SNS, IAM, and CloudWatch.
- Basic proficiency in Python and Python data frameworks (Pandas, PySpark).
- Experience with version control (GitHub) and CI/CD for data projects.
- Familiarity with healthcare data standards and HIPAA compliance.
- Excellent problem-solving skills, attention to detail, and ability to work independently.
- Strong communication skills, with experience mentoring or leading small technical efforts.
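The PHI-handling requirement above can be illustrated with a minimal masking step. The field names and 12-character truncation are assumptions for the sketch; a production HIPAA pipeline would use a keyed hash (HMAC) with managed key storage rather than a plain digest:

```python
import hashlib

def mask_phi(record, phi_fields=("patient_name", "ssn")):
    # Replace PHI fields with a truncated SHA-256 digest so downstream
    # analytics jobs never see the raw identifier. This is a sketch:
    # real pipelines use a keyed hash and a key-management service.
    masked = dict(record)
    for field in phi_fields:
        if field in masked:
            digest = hashlib.sha256(str(masked[field]).encode()).hexdigest()
            masked[field] = digest[:12]
    return masked

if __name__ == "__main__":
    row = {"claim_id": "c-100", "patient_name": "Jane Doe", "amount": 120.5}
    print(mask_phi(row))
```

A step like this would typically run inside the transform stage of the ETL/ELT pipeline, before any write to a shared warehouse table.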
Posted 1 month ago