5.0 - 10.0 years
40 - 50 Lacs
Bengaluru
Work from Office
About Simplilearn: Simplilearn.com is the world's #1 online bootcamp, providing digital skills training to help individuals acquire the skills they need to thrive in the digital economy. We provide rigorous online training in disciplines such as Cyber Security, Cloud Computing, Project Management, Digital Marketing, and Data Science, among others. In other words, we specialize in areas where technologies and best practices are changing rapidly and the demand for qualified candidates significantly exceeds supply. Designed and continually updated by 2,000+ renowned industry and academic experts, our offerings include individual courses, comprehensive certification programs, and partnerships with world-renowned universities, helping millions of professionals gain the work-ready skills they need to excel in their careers, and thousands of organizations with their corporate training and employee upskilling needs. Our practical, applied approach has resulted in 85 percent of learners getting promotions or new jobs. Learn by doing with over 1,000 live classes each month, real-world projects, and more. Experience the new way of learning at Simplilearn.
Title: DevOps Tech Lead / Architect
Location: Bengaluru (HSR Layout)
=============================
Duties and responsibilities:
- Build platforms and frameworks that provide consistent, verifiable, and automatic management of applications and infrastructure between non-production and production environments
- Configure and deploy cloud-based applications and implement DevOps best practices
- Participate in the software development lifecycle, including the infrastructure design and debugging required for successful implementation
- Manage and build CI/CD pipelines for various code bases hosted on different source controls (Git/SVN)
- Deploy monitoring solutions for applications and leverage APM tools such as New Relic/Datadog
- Perform basic administration of MySQL servers
- Manage multi-account setups on AWS/Azure/GCP
- Assist in setting up a NOC team by establishing alarms and incident management processes using PagerDuty
- Understand and execute best practices to manage and reduce security risk and protect networks and data
- Support and mentor team members
Desired skills:
- Experience with source control management (Git, SVN)
- Experience in IaC (Terraform)
- At least 5 years of DevOps experience, with at least 3 years in AWS Cloud
- Exposure to automation using Jenkins/GitLab or AWS Code developer tools
- Experience with containerization using Docker (ECS is a plus)
- Strong scripting skills (Bash, Python)
- Experience with monitoring tools (CloudWatch, Nagios, Grafana, Zabbix)
- Experience managing Dev/Test environments (production environment experience is a plus)
- Strong analytical skills
- Good verbal and written communication skills
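The listing's first duty, consistent and verifiable management of applications between non-production and production environments, can be sketched as a configuration-drift check. This is a minimal illustration only; the key names and the allowed-divergence list are invented assumptions, not anything from the posting.

```python
# Hypothetical config-parity check between environments. The keys listed in
# ALLOWED_DIVERGENCE are ones we assume may legitimately differ in production.
ALLOWED_DIVERGENCE = {"instance_count", "db_host", "log_level"}

def config_drift(non_prod: dict, prod: dict) -> set:
    """Return config keys that differ between environments but are not
    on the allowed-divergence list."""
    differing = {k for k in non_prod.keys() | prod.keys()
                 if non_prod.get(k) != prod.get(k)}
    return differing - ALLOWED_DIVERGENCE

non_prod = {"runtime": "python3.12", "instance_count": 1, "db_host": "db.test"}
prod = {"runtime": "python3.11", "instance_count": 4, "db_host": "db.prod"}

# "runtime" differs but is not allowed to diverge, so it is flagged as drift.
drift = config_drift(non_prod, prod)
```

In a real pipeline this kind of check would run against rendered Terraform or deployment manifests rather than inline dictionaries.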
Posted 3 hours ago
6.0 - 8.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Role & responsibilities:
- Perform security monitoring of Pega Cloud commercial environments using multiple security tools and dashboards
- Perform security investigations to identify indicators of compromise (IOCs) and better protect Pega Cloud and our clients from unauthorized or malicious activity
- Actively contribute to incident response activities as we identify, contain, eradicate, and recover
- Contribute to standard operating procedure (SOP) and policy development for CSOC detection and analysis tools and methodologies
- Assist in enhancing security incident response plans, conducting thorough investigations, and recommending remediation measures to prevent future incidents
- Perform threat hunts for adversarial activity within Pega Cloud to identify evidence of attacker presence that may not have been caught by existing detection mechanisms
- Assist the threat detection team in developing high-confidence Splunk notables focused on use cases for known and emerging threats, based on hypotheses derived from the Pega threat landscape
- Assist in the development of dashboards, reports, and other non-alert content to maintain and improve situational awareness of Pega Cloud's security posture
- Assist in the development of playbooks for analysts to investigate both high-confidence and anomalous activity
Preferred candidate profile:
- SANS, Offensive Security, or other top-tier, industry-recognized technical security certifications focused on analysis, detection, and/or incident response
- Industry recognition for identifying security gaps to secure applications or products
What you've accomplished:
- A minimum of 6 years of industry-relevant experience, with a demonstrated working knowledge of cloud architecture, infrastructure, and resources, along with the associated services, threats, and mitigations
- A minimum of 4 years in operational SIEM (Security Information and Event Management) roles focused on analysis, investigations, and incident response; experience with Google Chronicle SIEM is an added advantage
- 3+ years of operational cloud security experience, preferably AWS and/or GCP, including knowledge and analysis of cloud logs such as CloudTrail, Cloud Audit, GuardDuty, Security Command Center, CloudWatch, Cloud Ops, Trusted Advisor, Recommender, VPC Flow, and WAF logs
- 4+ years of operational experience with EDR/XDR platforms and related analysis and response techniques
- Operational experience performing investigations and incident response within Linux and Windows hosts as well as AWS, GCP, and related Kubernetes environments (EKS/GKE)
- Solid working knowledge of the MITRE ATT&CK framework and its associated TTPs, particularly the cloud matrix, and how to map detections against it
- Familiarity with the OWASP Top 10 vulnerabilities and best practices for mitigating these security risks
- A solid foundational understanding of computer, OS (Linux/Windows), and network architecture concepts and related exploits/attacks
- Experience developing standard operating procedures (SOPs), incident response plans, runbooks/playbooks for repeated actions, and security operations policies
- Experience with Python, Linux shell/Bash, and PowerShell scripting is a plus
- Excellent verbal and written communication skills, including poise in high-pressure situations
- A demonstrated ability to work in a team environment and foster a healthy, productive team culture
- A Bachelor's degree in Cybersecurity, Computer Science, Data Science, or a related field
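Mapping detections against the MITRE ATT&CK framework, as the posting asks, is often done as a coverage check over a detection catalog. A toy sketch follows; the detection names are invented, though the technique IDs are real ATT&CK cloud-matrix entries.

```python
# Hypothetical SIEM detection catalog mapped to MITRE ATT&CK technique IDs.
DETECTIONS = {
    "aws_root_login": ["T1078.004"],         # Valid Accounts: Cloud Accounts
    "guardduty_crypto_mining": ["T1496"],    # Resource Hijacking
    "iam_policy_escalation": ["T1098.003"],  # Account Manipulation: Additional Cloud Roles
}

# Techniques the team wants covered; T1530 is Data from Cloud Storage.
REQUIRED_TECHNIQUES = {"T1078.004", "T1496", "T1098.003", "T1530"}

def coverage_gaps(detections: dict, required: set) -> set:
    """Return required techniques with no detection mapped to them."""
    covered = {t for techniques in detections.values() for t in techniques}
    return required - covered

gaps = coverage_gaps(DETECTIONS, REQUIRED_TECHNIQUES)  # T1530 has no detection
```

Real coverage tooling (e.g., ATT&CK Navigator layers) works the same way at a larger scale: detections are tagged with technique IDs, and the gap set drives new detection development.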
Posted 3 hours ago
7.0 - 12.0 years
15 - 20 Lacs
Pune
Work from Office
We are looking for a Lead Software Engineer - AI/ML Engineer.
You'll make a difference by: Siemens is seeking a visionary and technically strong Lead AI/ML Engineer to spearhead the development of intelligent systems that power the future of sustainable and connected transportation. This role will lead the design and deployment of AI/ML solutions across domains such as efficiency improvements in the software development process, predictive maintenance, traffic analytics, computer vision for rail safety, and intelligent automation in rolling stock and rail infrastructure.
Key responsibilities:
- Lead the end-to-end lifecycle of AI/ML projects, from data acquisition and model development to deployment and monitoring, within the context of mobility systems
- Architect scalable ML pipelines that integrate with Siemens Mobility platforms and other edge/cloud-based systems
- Collaborate with multi-functional teams, including domain experts, software architects, and system engineers, to translate mobility use cases into AI-driven solutions
- Mentor junior engineers and data scientists, and foster a culture of innovation, quality, and continuous improvement
- Evaluate and integrate innovative research in AI/ML, including generative AI, computer vision, and time-series forecasting, into real-world applications
- Ensure compliance with Siemens' AI ethics, cybersecurity, and data governance standards
Required qualifications:
- Bachelor's, Master's, or PhD in Computer Science, Machine Learning, Data Science, or a related field
- 7+ years of experience in AI/ML engineering, with at least 2 years in a technical leadership role
- Strong programming skills in Python and experience with ML frameworks such as TensorFlow, PyTorch, and scikit-learn
- Proven experience deploying ML models in production, preferably in industrial or mobility environments
- Familiarity with MLOps tools (e.g., MLflow, Kubeflow) and cloud platforms (Azure, AWS, or GCP)
- Solid understanding of data engineering, model versioning, and CI/CD for ML
Preferred qualifications:
- Experience in transportation, automotive, or industrial automation domains
- Knowledge of edge AI deployment, sensor fusion, or real-time analytics
- Contributions to open-source AI/ML projects or published research
What we offer:
- The opportunity to shape the future of mobility through AI innovation
- Access to Siemens' global network of experts, labs, and digital platforms, flexible work arrangements, and continuous learning opportunities
- A mission-driven environment focused on sustainability, safety, and digital transformation
Desired skills:
- 9+ years of experience is required
- Great communication skills
- Analytical and problem-solving skills
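The model-versioning and CI/CD-for-ML requirement can be made concrete with a toy promotion gate of the kind such pipelines automate. The metric name, version strings, and threshold below are invented for illustration and are not Siemens' actual criteria.

```python
# Hypothetical promotion gate: a candidate model replaces the production
# model only if it beats it on validation accuracy by a minimum margin.
def should_promote(candidate: dict, production: dict,
                   min_gain: float = 0.01) -> bool:
    """Return True if the candidate clears production by at least min_gain."""
    return candidate["val_accuracy"] >= production["val_accuracy"] + min_gain

production_model = {"version": "1.4.0", "val_accuracy": 0.912}
candidate_model = {"version": "1.5.0-rc1", "val_accuracy": 0.931}

# 0.931 >= 0.912 + 0.01, so the candidate is promoted.
promote = should_promote(candidate_model, production_model)
```

In practice tools like MLflow store these metrics per registered model version, and a CI job runs a comparable check before transitioning a version to production.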
Posted 3 hours ago
2.0 - 7.0 years
9 - 13 Lacs
Pune
Work from Office
You'll make a difference by:
- Proficiency in Golang
- Ability to understand and analyze business requirements by interacting with relevant stakeholders
- Experience in developing efficient software designs by applying design principles
- Ability to write automated unit tests and integration tests for implemented features
- Ensuring conformance to quality processes to help the project meet its quality goals
- Ability to effectively investigate reported software defects; strong debugging skills
- Proactive interaction with product owners and architects for technical clarifications and presentations
- Experience with AWS is a must
You'll win us over by:
- An engineering degree (B.E/B.Tech/MCA/M.Tech/M.Sc) with a good academic record
- 4-6 years of demonstrable experience in software development
- Proactive interaction with product owners and architects for technical clarifications and presentations
- Communicating clearly and effectively at various levels - intra-team, inter-group, spoken and written skills, including email, presentation, and articulation skills
- Understanding of version control systems like Git
- Docker knowledge and cloud infrastructure knowledge will be an added advantage
- Exposure to the Building Automation domain would be an added advantage
- Being a good team player
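The automated unit-testing expectation above is the familiar pattern of pairing each feature with assertions on its behavior. A minimal sketch follows, shown in Python with `unittest` for brevity (the role itself would use Go's `testing` package); the function and its device-ID format are hypothetical.

```python
import unittest

def parse_device_id(raw: str) -> str:
    """Hypothetical feature: normalize a building-automation device ID."""
    cleaned = raw.strip().upper()
    if not cleaned:
        raise ValueError("empty device id")
    return cleaned

class ParseDeviceIdTest(unittest.TestCase):
    def test_normalizes_case_and_whitespace(self):
        self.assertEqual(parse_device_id("  ahu-01 "), "AHU-01")

    def test_rejects_empty_input(self):
        with self.assertRaises(ValueError):
            parse_device_id("   ")

# Run the suite programmatically; a CI pipeline would invoke the test runner.
suite = unittest.TestLoader().loadTestsFromTestCase(ParseDeviceIdTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```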
Posted 3 hours ago
2.0 - 7.0 years
15 - 19 Lacs
Gurugram
Work from Office
About the team: The global engineering team is responsible for the development of reference auxiliaries and adaptation to customer requirements for large gas and steam turbine and generator packages. The Tools and Data Management (TDM) department is represented in Germany, India, and the USA, and administers and further develops the engineering tool landscape, including its data. To improve the efficiency and daily work experience of the engineering teams, we focus on easy user interfaces, seamless flow of data between tools, excellent tool performance, simplification of the tool landscape, and the application of effective techniques to evaluate, interpret, and visualize data.
A snapshot of your day: In the morning, you will continue reviewing ideas from the GCO team to identify business processes where generative language AI solutions can add value. You will engage with idea providers to better understand their motivations and details. After evaluating the input, you will refine your shortlist of potential projects. Before lunch, you will join the Siemens Energy (SE) AI community meeting to stay updated on the latest developments in the field. After lunch, you will present AI techniques, opportunities, and limitations to the Bid Management team at GCO. You will highlight promising ideas from your shortlist and initiate discussions about which ideas should be prioritized for implementation. In the afternoon, you'll collaborate with colleagues to discuss new ideas and the necessary architecture for data sources, APIs, and data lakes. You will end your day by summarizing progress and planning the next steps. Then it's time to call it a day.
Your work will be oriented around the following:
- Drive the development and implementation of advanced data analytics solutions, with a focus on natural language processing (NLP) and ML techniques
- Support the implementation of data-driven project execution methodologies across the engineering, bid management, project management, and procurement disciplines
- Coordinate data integration and analysis requirements, as well as solutions, with IT
- Define the data integration and analysis capabilities required for the organization, and train the team
- Join the expert network across different business areas, including IT, and stay up to date on the latest data science and ML techniques
- Ensure effective communication with project stakeholders
We don't need superheroes, just super minds:
- Master's degree in computer science, data science, applied mathematics, statistics, or a comparable engineering/IT discipline
- A minimum of 2 years of experience in developing AI applications and integrating AI with existing applications
- Enthusiasm about unleashing the full potential of data, data analytics, and AI/ML solutions in operative business processes
- Advanced programming skills in Python and other relevant programming languages (e.g., Java, JavaScript)
- Proficiency with database management systems (e.g., SQL and NoSQL)
- Experience with cloud platforms (e.g., AWS, Azure, and GCP)
- Excellent team player in global, interdisciplinary teams
- Experience working within Agile or Scrum frameworks
- An open mindset toward new technologies and enthusiasm for continuous learning
- An analytical mindset and a systematic style of working
- Fluent English is a must, along with excellent communication and presentation skills
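The NLP-focused analytics the role describes can start as simply as ranking salient terms in engineering documents. A toy sketch follows; the stopword list and sample text are illustrative assumptions, and real pipelines would use proper NLP libraries or LLM-based extraction.

```python
# Toy keyword extraction: frequency ranking after stopword removal.
from collections import Counter
import re

STOPWORDS = {"the", "a", "of", "for", "and", "to", "in", "is"}

def top_terms(text: str, n: int = 3) -> list:
    """Return the n most frequent non-stopword terms in a text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [term for term, _ in counts.most_common(n)]

doc = ("The turbine bid requires turbine auxiliaries data and bid "
       "schedule data for the turbine package.")
terms = top_terms(doc)  # "turbine" dominates, then "bid" and "data"
```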
Posted 3 hours ago
2.0 - 4.0 years
0 - 0 Lacs
Pune
Hybrid
What's in it for you? Actimize Premier is seeking a Data Scientist / Analyst (a Statistics or Applied Mathematics background is mandatory) to design, develop, and optimize cutting-edge algorithms and machine learning solutions for financial fraud prevention and anti-money laundering (AML) applications. You will work on behavioral analytics and machine learning models while mentoring junior team members and collaborating closely with cross-functional teams. This role provides an opportunity to contribute to innovative, impactful products at the forefront of financial crime prevention technology.
Key responsibilities:
- Develop and optimize advanced machine learning models and algorithms for fraud detection and AML applications
- Mentor and guide junior data scientists and analysts, fostering a collaborative and high-performance team environment
- Leverage cloud platforms (AWS, Azure, Google Cloud) to implement scalable AI/ML solutions
- Contribute to the design and implementation of core algorithms, mathematical models, and data-driven solutions
- Explore and apply emerging technologies such as generative AI to enhance fraud detection capabilities
- Collaborate with product managers, engineers, and other stakeholders to translate business requirements into robust technical solutions
- Perform statistical analysis, data mining, and visualization using tools like Python or R
- Drive innovation by researching and integrating the latest advancements in data science and machine learning
- Support the team in building user behavior models, leveraging Bayesian statistics, and exploring advanced techniques like social network analysis
Skills and experience required:
- Educational background: Master's or Ph.D. in Statistics, Applied Mathematics, Data Science, Computer Science, Electrical Engineering, or a related quantitative field
- Professional experience: 2-4 years of experience in algorithm development, statistical analysis, and machine learning
- Hands-on experience applying advanced machine learning techniques to real-world datasets in financial fraud prevention, AML, or similar domains
- Technical expertise: Proficiency in Python for statistical analysis, data modeling, and visualization; experience with cloud technologies and platforms (AWS, Azure, or Google Cloud); a solid understanding of databases and SQL (e.g., MySQL); exposure to generative AI techniques and their applications in data science
- Soft skills and teamwork: Strong mentoring and leadership skills, with a proven ability to guide and develop junior team members; excellent problem-solving skills with a pragmatic approach to balancing theory and practical application; effective communication skills to collaborate across teams and present complex ideas to stakeholders; resourceful, adaptable, and passionate about financial crime prevention technologies
Preferred qualifications:
- Knowledge of user behavior modeling and Bayesian statistics
- Experience in natural language processing (NLP)
- Familiarity with tools and libraries for generative AI (e.g., Transformer models)
- Understanding of the financial crime prevention domain and its associated challenges
Why join us? At Actimize Premier, you will play a critical role in developing industry-leading solutions to combat financial fraud and money laundering. This role offers the opportunity to work on innovative technologies, mentor a talented team, and make a tangible impact in the fight against financial crime. Join us to lead the evolution of AI-driven fraud detection and AML technologies.
Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.
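As a greatly simplified stand-in for the behavioral-analytics models the posting describes, a transaction can be flagged when its amount deviates sharply from a user's history, here via a z-score. Real fraud/AML models are far richer; the threshold and data below are invented for illustration.

```python
# Toy anomaly flag: z-score of a new amount against the user's history.
from statistics import mean, stdev

def is_anomalous(history: list, amount: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag amounts more than z_threshold standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > z_threshold

history = [120.0, 95.0, 110.0, 130.0, 105.0, 98.0]
flag_small = is_anomalous(history, 125.0)   # within the user's normal range
flag_large = is_anomalous(history, 5000.0)  # extreme deviation, flagged
```

Production systems replace this univariate rule with learned behavioral profiles, Bayesian models, and network features, but the "score against an expected baseline" shape is the same.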
Requisition ID: 8013
Reporting into: Tech Manager
Role type: Individual Contributor
Posted 3 hours ago
1.0 - 5.0 years
6 - 10 Lacs
Bengaluru
Work from Office
We are seeking a highly skilled Ontology Expert & Knowledge Graph Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring accurate representation and integration of complex data sets. You will leverage industry best practices to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs that drive data-driven decision-making and innovation within the company.
Job purpose: The Ontology & Knowledge Graph / Data Engineer designs, develops, implements, and maintains enterprise ontologies in support of the organization's data-driven digitalization strategy. The role combines architecture ownership with hands-on engineering: you will model ontologies, stand up graph infrastructure, build semantic pipelines, and expose graph services that power search, recommendations, analytics, and GenAI solutions for our organization. We are seeking a highly skilled, motivated expert to shape the future of enterprise AI by designing and implementing large-scale ontologies and knowledge graph solutions. You'll work closely with internal engineering and AI teams to build scalable data models that enable advanced reasoning, semantic search, and agentic AI workflows.
Key responsibilities:
1. Ontology development:
- Design and apply ontology principles to improve semantic reasoning and data integration, ensuring alignment with business requirements and industry standards
- Collaborate with domain experts, product managers, and customers to capture and formalize domain knowledge into ontological structures and vocabularies, and improve data discoverability
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes
- Integrate semantic data models with existing data infrastructure and applications
2. Knowledge graph implementation and data integration:
- Design and build knowledge graphs based on ontologies
- Build knowledge graphs from data drawn from multiple sources while ensuring data integrity and consistency
- Collaborate with data engineers on data ingestion and ensure smooth integration of data from multiple sources
- Administer and maintain graph database solutions, including both semantic and property graphs
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems
3. Data quality and governance:
- Ensure the quality, accuracy, and consistency of ontologies and knowledge graphs
- Define and implement data governance processes and standards for ontology development and maintenance
4. Collaboration and communication:
- Collaborate with internal engineering teams to align data architecture with GenAI capabilities
- Leverage AI techniques by aligning knowledge models with RAG pipelines and agent orchestration
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions
5. Research and innovation:
- Stay up to date with the latest advancements in NLP, LLMs, and machine learning, and proactively identify opportunities to leverage new technologies for improved solutions
Experience:
- 4-6 years of industrial experience in AI, Data Science, or Data Engineering
- 2-3 years of hands-on experience building ontologies and knowledge systems
- Proficiency with graph databases such as Neo4j and GraphDB (RDF-based)
- Understanding of semantic standards such as OWL, RDF, and other W3C standards, as well as property graph approaches
- Familiarity with GenAI concepts, including retrieval-augmented generation and agent-based AI
Required knowledge/skills, education, and experience:
- Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field; a specialization in natural language processing is preferred
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL
- Proficiency in Python and other programming languages used for data engineering
- Experience with NLP and GenAI frameworks (LangChain, LangGraph)
- Good working project experience with cloud computing (AWS/Azure/GCP), including services such as VPCs, EBS, ALBs, NLBs, EC2, S3, and so on
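The RDF/SPARQL stack named above rests on triple pattern matching over subject-predicate-object statements. A toy pure-Python illustration follows; in practice you would use rdflib or a graph database such as GraphDB or Neo4j, and the entity names here are invented.

```python
# A tiny in-memory triple store. Each statement is (subject, predicate, object).
triples = {
    ("Pump_17", "isPartOf", "CoolingSystem"),
    ("CoolingSystem", "isPartOf", "Plant_A"),
    ("Pump_17", "hasStatus", "Active"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts like a SPARQL variable."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# Analogous to SPARQL: SELECT ?s ?o WHERE { ?s :isPartOf ?o }
parts = match(p="isPartOf")
```

Reasoning, semantic search, and RAG grounding all build on this primitive: a query is a pattern with wildcards, and inference materializes new triples from existing ones.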
Posted 3 hours ago
10.0 - 12.0 years
30 - 37 Lacs
Bengaluru
Work from Office
Job summary: We are seeking a skilled Software Developer with expertise in Java, Spring Boot, and microservices architecture. The ideal candidate will have a strong understanding of object-oriented programming (OOP) concepts, design patterns, and multithreading, and should also possess basic knowledge of database management. This role involves collaborating with cross-functional teams to design, develop, and maintain robust software solutions.
Overall responsibilities:
- Design, develop, and maintain scalable, high-performance applications using Java and Spring Boot
- Implement microservices architecture to enhance application modularity and scalability
- Collaborate with product managers, architects, and other stakeholders to gather requirements and translate them into technical specifications
- Conduct code reviews and ensure code quality through unit testing and best practices
- Troubleshoot and resolve software defects and performance issues
- Participate in the full software development lifecycle, including requirements analysis, design, development, testing, deployment, and support
- Stay updated with the latest industry trends and technologies to continuously improve application performance and user experience
Technical skills:
Must-have skills:
- Core Java: Strong understanding of OOP concepts, collections, and multithreading
- Spring Framework: Proficiency in Spring Boot, Spring MVC, and Spring Data
- Microservices: Experience in designing and developing microservices-based architectures
- Design patterns: Knowledge of common design patterns (e.g., Singleton, Factory, Observer)
- Database: Basic understanding of SQL and experience with relational databases (e.g., MySQL, PostgreSQL)
- Version control: Familiarity with Git
Preferred skills:
- Cloud platforms: Experience with AWS, Azure, or Google Cloud
- Containerization: Knowledge of Docker and Kubernetes
- CI/CD: Familiarity with continuous integration and continuous deployment practices
- Testing frameworks: Experience with JUnit, Mockito, or similar testing frameworks
Experience:
- 5 to 12 years of software development experience
- Proven track record of delivering high-quality software solutions in a fast-paced environment
- Experience working in Agile development environments
Day-to-day activities:
- Write clean, maintainable, and efficient code while following coding standards
- Participate in daily stand-up meetings and provide updates on progress and challenges
- Work closely with QA teams to ensure high-quality deliverables
- Analyze and improve system performance and reliability
- Document code, design specifications, and system architecture
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field; a Master's degree is a plus
- Relevant certifications in Java, Spring, or cloud technologies (optional but preferred)
Soft skills:
- Strong analytical and problem-solving skills
- Excellent communication and collaboration abilities
- Ability to work independently and as part of a team
- Adaptability to changing technologies and processes
- Strong attention to detail and commitment to quality
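Of the design patterns the posting names, Observer is the one most often asked about in interviews: a subject notifies registered observers of events without knowing who they are. A minimal sketch follows, in Python for brevity (the role itself is Java/Spring, where Spring's application events play the same role); the order-shipping scenario is invented.

```python
class OrderEvents:
    """Subject: notifies every registered observer when an order ships."""
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def order_shipped(self, order_id):
        for notify in self._observers:
            notify(order_id)

log = []
events = OrderEvents()
events.subscribe(lambda oid: log.append(f"email for {oid}"))    # email service
events.subscribe(lambda oid: log.append(f"metrics for {oid}"))  # metrics service
events.order_shipped("A-42")  # both observers are notified in order
```

The subject stays decoupled from its observers: adding an SMS notifier later means one more `subscribe` call, with no change to `OrderEvents`.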
Posted 3 hours ago
5.0 - 7.0 years
15 - 25 Lacs
Pune
Work from Office
Job summary: We are seeking a skilled Software Developer with expertise in Java, Spring Boot, and microservices architecture. The ideal candidate will have a strong understanding of object-oriented programming (OOP) concepts, design patterns, and multithreading, and should also possess basic knowledge of database management. This role involves collaborating with cross-functional teams to design, develop, and maintain robust software solutions.
Overall responsibilities:
- Design, develop, and maintain scalable, high-performance applications using Java and Spring Boot
- Implement microservices architecture to enhance application modularity and scalability
- Collaborate with product managers, architects, and other stakeholders to gather requirements and translate them into technical specifications
- Conduct code reviews and ensure code quality through unit testing and best practices
- Troubleshoot and resolve software defects and performance issues
- Participate in the full software development lifecycle, including requirements analysis, design, development, testing, deployment, and support
- Stay updated with the latest industry trends and technologies to continuously improve application performance and user experience
Technical skills:
Must-have skills:
- Core Java: Strong understanding of OOP concepts, collections, and multithreading
- Spring Framework: Proficiency in Spring Boot, Spring MVC, and Spring Data
- Microservices: Experience in designing and developing microservices-based architectures
- Design patterns: Knowledge of common design patterns (e.g., Singleton, Factory, Observer)
- Database: Basic understanding of SQL and experience with relational databases (e.g., MySQL, PostgreSQL)
- Version control: Familiarity with Git
Preferred skills:
- Cloud platforms: Experience with AWS, Azure, or Google Cloud
- Containerization: Knowledge of Docker and Kubernetes
- CI/CD: Familiarity with continuous integration and continuous deployment practices
- Testing frameworks: Experience with JUnit, Mockito, or similar testing frameworks
Experience:
- 5 to 12 years of software development experience
- Proven track record of delivering high-quality software solutions in a fast-paced environment
- Experience working in Agile development environments
Day-to-day activities:
- Write clean, maintainable, and efficient code while following coding standards
- Participate in daily stand-up meetings and provide updates on progress and challenges
- Work closely with QA teams to ensure high-quality deliverables
- Analyze and improve system performance and reliability
- Document code, design specifications, and system architecture
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field; a Master's degree is a plus
- Relevant certifications in Java, Spring, or cloud technologies (optional but preferred)
Soft skills:
- Strong analytical and problem-solving skills
- Excellent communication and collaboration abilities
- Ability to work independently and as part of a team
- Adaptability to changing technologies and processes
- Strong attention to detail and commitment to quality
Posted 3 hours ago
6.0 - 10.0 years
17 - 30 Lacs
Mohali, Chandigarh, Panchkula
Work from Office
Responsibilities:
- Design, develop, and maintain robust, secure, high-performance web apps using PHP and the Laminas framework
- Build and maintain front-end interfaces using HTML5, CSS3, and JavaScript
- Write clean, modular, and well-documented code following best practices and coding standards
Required candidate profile:
- 6+ years of working experience as a PHP Developer
- Deep expertise with Laminas and familiarity with MVC architecture and service-oriented design
- Experience developing server-side components and APIs with Node.js
Posted 3 hours ago
4.0 - 7.0 years
10 - 14 Lacs
Chennai
Work from Office
Must have: Big Data, GCP (BigQuery, Dataproc)
We are looking for energetic, high-performing, and highly skilled data engineers to help shape our technology and product roadmap. You will be part of the fast-paced, entrepreneurial Global Campaign Tracking (GCT) team under the Enterprise Personalization portfolio, focused on delivering the next generation of global marketing capabilities. The team is responsible for marketing campaign tracking of new account acquisitions and bounty payments, and leverages large-scale data engineering technologies such as Adobe Analytics, Google Analytics, SQL, PySpark, GCP, BigQuery, Dataproc, Hive, Kafka, and Java.
Focus: Designs, develops, solves problems, debugs, evaluates, modifies, deploys, and documents software and systems that meet the needs of customer-facing applications, business applications, and/or internal end-user applications.
Minimum qualifications: This high-energy engineer must have:
- A Bachelor's degree in computer science, computer engineering, another technical discipline, or equivalent
- Hands-on expertise with application design, software development, and automated testing
- Strong programming knowledge in SQL, PySpark, Dataproc, and BigQuery
- Hands-on experience in Big Data technologies (Spark, Hive)
- Understanding and experience with UNIX/Shell/Perl/Python scripting
- Database query optimization and indexing
- Web services design and implementation using REST/SOAP and Java is a plus
- Experience collaborating with the business to drive requirements/Agile story analysis
- Experience with design and coding across one or more platforms and languages as appropriate
Bonus skills:
- Machine learning/data mining
- Object-oriented design and coding
- Adobe Marketing Campaign products
Roles and responsibilities:
- Develop and maintain large-scale data processing pipelines using PySpark, Dataproc, BigQuery, and SQL
- Use BigQuery and Dataproc to migrate existing Hadoop/Spark/Hive workloads to Google Cloud
- Be proficient in BigQuery for batch and interactive data analysis
- Function as a member of an agile team by contributing to software builds through consistent development practices (tools, common components, and documentation)
- Develop and test software, including ongoing refactoring of code, and drive continuous improvement in code structure and quality
- Enable the deployment, support, and monitoring of software across test, integration, and production environments
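The batch SQL analysis this role centers on has the same shape regardless of engine; as a stand-in for BigQuery, the sketch below runs a campaign-tracking aggregation against SQLite from Python's standard library. The table and column names are invented for illustration, not the team's actual schema.

```python
# Toy campaign-tracking aggregation: accounts and bounty totals per campaign.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signups (campaign TEXT, bounty REAL)")
conn.executemany("INSERT INTO signups VALUES (?, ?)",
                 [("spring_promo", 25.0), ("spring_promo", 25.0),
                  ("referral", 40.0)])

rows = conn.execute(
    """SELECT campaign, COUNT(*) AS accounts, SUM(bounty) AS total_bounty
       FROM signups
       GROUP BY campaign
       ORDER BY total_bounty DESC"""
).fetchall()
# rows -> [("spring_promo", 2, 50.0), ("referral", 1, 40.0)]
```

In BigQuery the same GROUP BY query would run over partitioned, columnar tables at a much larger scale, typically fed by a PySpark or Dataproc ingestion pipeline.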
Posted 3 hours ago
2.0 - 5.0 years
6 - 10 Lacs
gurugram
Work from Office
Department: ISS Distribution
Location: Gurgaon
Level: 2
About your team
The ISS Distribution business comprises Fidelity's Institutional Business Units in the UK, EMEA and Asia Pacific and is a strategic area targeted for growth over the coming years. The Technology Department has been acting as the key enabler for the business in achieving its goals. The Institutional portfolio of projects includes a large collection of strategic initiatives as well as tactical ones to support day-to-day operations and strengthen the technical environment. Primary technologies used in these applications are: Java/J2EE, AWS, Snowflake, Spring MVC, React, Layer-7.
About your role
We are seeking a talented Site Reliability Engineer (SRE) to join our Technology team supporting critical applications within ISS Production Services. This role blends traditional software engineering practices with reliability-focused operations, aiming to enhance the scalability, availability, and performance of client- and market-facing applications. The SRE will work directly with application development, architecture, DevOps, and business teams to ensure systems are designed and maintained with reliability and performance in mind, while meeting the demanding requirements of financial services operations.
About you
1. Reliability & Performance Engineering
Partner with development teams to define SLOs, SLIs, and error budgets that align with business needs.
Influence the design and architecture of systems to ensure high availability, resilience, and scalability across trading, portfolio management, compliance, and research platforms.
Proactively identify bottlenecks and implement performance improvements for latency-sensitive applications.
2. Application Support & Incident Management
Serve as an escalation point for production issues affecting business-critical client reporting applications.
Perform real-time troubleshooting and root cause analysis during incidents, followed by detailed postmortems and action items.
Collaborate with product and operations teams to prioritize and remediate reliability risks.
3. Observability & Automation
Implement and evolve observability stacks (metrics, logging, tracing) to provide actionable insights into application health and user experience.
Automate manual processes for deployment, monitoring, and incident remediation using scripting and configuration management tools (e.g., Ansible, Terraform, Python).
4. Business Context & Domain Alignment
Apply understanding of trading workflows, portfolio analytics, risk management, and regulatory reporting to prioritize engineering efforts.
Translate domain-specific requirements into technical reliability strategies for applications handling large volumes of financial data.
Experience and Qualifications Required
We are seeking a motivated and skilled SRE with 3-4 years of experience to join our team. The ideal candidate should have hands-on experience in automation and monitoring, and good knowledge of containerization concepts.
Strong programming/scripting background (e.g., Python, Go, Shell) with a focus on automation and tooling.
Deep understanding of distributed systems and modern application architectures (microservices, containers, service mesh).
Experience supporting mission-critical applications in a highly regulated financial services environment.
Familiarity with event-driven systems, message queues (e.g., Kafka), databases (Oracle), and cloud-native platforms.
Knowledge of financial services processes such as trade lifecycle, NAV calculations, order management, and market data integration is highly desirable.
Essential Skills:
2+ years of hands-on experience with cloud platforms (e.g., AWS, GCP, Azure) and infrastructure-as-code practices.
Knowledge of ITIL practices and support experience
Good knowledge of Oracle database concepts, SQL statements (DML/DDL), stored procedures and functions
Strong collaboration and communication skills, with an ability to influence development teams and business stakeholders
Experience in Python and Shell scripting
Understanding of container orchestration principles (Kubernetes) and infrastructure-as-code tools
Experience in using monitoring tools like ELK and New Relic
Experience with GitHub/Bitbucket as source control tools and build tools like Jenkins and UrbanCode Deploy
Proven ability to work well under pressure and in a team environment
Self-motivated, flexible, responsible, and a penchant for quality
Ability to work closely with cross-functional teams
Ability to prioritise own activities and work under hard deadlines
Desirable Skills
Good analytical, problem-solving and documentation skills
Calm approach when under pressure
Solid organisational skills
A real desire to do things the right way whilst remaining delivery focused
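The SLO/error-budget work this role calls for comes down to simple arithmetic: a 99.9% availability SLO over a 30-day window leaves roughly 43 minutes of allowable downtime. A small sketch of that calculation (the SLO value and window are examples, not the team's actual targets):

```python
def error_budget_minutes(slo, window_days):
    """Allowed downtime (minutes) for an availability SLO over a window."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo, window_days, downtime_minutes):
    """Minutes of budget left after observed downtime; negative means the budget is blown."""
    return error_budget_minutes(slo, window_days) - downtime_minutes

# A 99.9% SLO over 30 days allows about 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999, 30), 1))  # 43.2
```

Teams typically gate risky releases on how much of this budget remains in the current window.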
Posted 4 hours ago
7.0 - 12.0 years
10 - 15 Lacs
pune
Work from Office
We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and we are relentless in the pursuit of innovation! The IZOT product line includes BMC's Intelligent Z Optimization & Transformation products, which help the world's largest companies to monitor and manage their mainframe systems. The modernization of the mainframe is the beating heart of our product line, and we achieve this goal by developing products that improve the developer experience, mainframe integration, the speed of application development, the quality of the code and the applications' security, while reducing operational costs and risks. We have acquired several companies along the way. BMC is looking for a Senior Java Developer, an innovator at heart, to join our IZOT team of highly skilled software developers. In this role, you will design and develop new features, as well as maintain existing features. You will focus on backend development of an industry-leading SaaS product. Here is how, through this exciting role, YOU will contribute to BMC's and your own success:
Design and develop new features as well as maintain existing features by adding improvements and fixing defects in complex areas (using Java)
Play a vital role in project design to ensure scalability, reliability, and performance are met
Assist in troubleshooting complex technical problems in development and production
Implement methodologies, processes & tools
Work in agile within a Scrum team to meet deadlines and produce high-quality features
Collaborate with other teams to develop joint features
To ensure you're set up for success, you will bring the following skillset & experience:
7+ years of experience in Java backend development
Experience with Spring Boot, Docker, Kubernetes
Experience working in a multi-threaded environment
Experience with relational databases (Oracle, PostgreSQL, MSSQL)
Self-learner who's passionate about problem solving and technology
Team player with good communication skills in English (verbal and written)
Whilst these are nice to have, our team can help you develop the following skills:
Experience with microservices architecture
Public cloud (AWS, Azure, GCP)
Python, Node.js, C/C++
Automation frameworks such as Robot Framework
CA-DNP
Posted 4 hours ago
3.0 - 7.0 years
12 - 17 Lacs
gurugram
Work from Office
Job Summary
Synechron is seeking a proficient QA Python Automation Engineer to enhance our software quality assurance processes by developing and maintaining automation solutions. In this position, you will design, implement, and optimize automation scripts and frameworks to improve operational efficiency, reduce manual testing efforts, and ensure the delivery of high-quality software. You will collaborate across QA, DevOps, and Development teams to identify automation opportunities, integrate tools, and promote best practices in automation testing and system automation, supporting the organization's commitment to delivering reliable and efficient digital solutions.
Software Requirements
Required Skills:
Python (version 3.7 or higher), with extensive experience in scripting for automation
Automation libraries such as requests, Selenium, PyAutoGUI, Pandas, or similar
Automated testing frameworks: PyTest, unittest, or equivalent
API integrations with RESTful services
Version control: Git (experience with branching, pull requests, and code management)
Preferred Skills:
Experience with CI/CD tools (Jenkins, GitLab CI, CircleCI) for automated deployment and testing
Familiarity with cloud services and automation in cloud environments (AWS, Azure, GCP)
Knowledge of configuring and working with testing environments on Windows and Linux platforms
Overall Responsibilities
Design, develop, and maintain automation scripts, frameworks, and tools to streamline manual testing, deployment, and operational workflows
Identify opportunities for automation within testing, deployment, and operational support, and implement scalable solutions
Collaborate with QA, DevOps, and IT teams to automate repetitive tasks and improve automation coverage
Develop and execute automated test scripts to enhance product quality and reduce manual testing efforts
Integrate automation solutions with existing tools, APIs, and cloud services for seamless workflows
Monitor, troubleshoot, and optimize automation scripts for performance, reliability, and security
Maintain comprehensive documentation of automation processes, frameworks, and workflows
Stay current with evolving automation tools, libraries, and best practices in Python and testing methodologies
Technical Skills (By Category)
Programming Languages:
Required: Python (3.7+), scripting expertise for automation purposes
Preferred: Additional scripting or programming languages such as JavaScript or Shell scripting
Databases/Data Management:
Experience with relational databases (e.g., MySQL, SQL Server, PostgreSQL) for data validation and testing
Knowledge of working with APIs to fetch and validate data
Cloud Technologies:
Preferred: Exposure to cloud platforms (AWS, Azure, GCP) for automating deployment or validation workflows
Frameworks and Libraries:
Selenium, Requests, PyAutoGUI, Pandas, PyTest, unittest
Development Tools and Methodologies:
Version control with Git
CI/CD pipelines (Jenkins, GitLab CI, CircleCI)
Agile development practices and test-driven development (TDD)
Security Protocols:
Basic understanding of secure coding and handling sensitive data during automation processes
Experience Requirements
3 to 7 years of hands-on experience in QA automation, scripting, and testing
Demonstrable experience building and maintaining automation frameworks and scripts
Proven track record of integrating automation solutions within large or complex systems
Industry experience in software QA, preferably in financial, banking, or enterprise environments
Exposure to cloud environments and APIs is an advantage but not mandatory
Day-to-Day Activities
Develop and enhance automation scripts for regression testing, API validation, and operational workflows
Collaborate with QA, development, and DevOps teams during planning and daily stand-ups
Conduct code reviews and ensure adherence to automation standards and best practices
Troubleshoot and resolve issues related to automation scripts and workflows
Support continuous integration and continuous delivery pipelines through automation
Regularly review and improve existing automation frameworks for scalability and robustness
Create and maintain documentation of automation architecture, scripts, and workflows
Participate in process improvement initiatives to enhance testing efficiency and effectiveness
Qualifications
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field
Professional certifications in automation testing (e.g., ISTQB, Certified Selenium Professional) are advantageous
Proven experience developing automation solutions in Python for testing and operational automation
Familiarity with the software development lifecycle (SDLC) and QA processes
Professional Competencies
Strong analytical and troubleshooting skills with keen attention to detail
Excellent communication skills to collaborate effectively within cross-functional teams
Ability to manage multiple priorities and deliver high-quality outputs on time
Adaptability to new automation tools, practices, and evolving project requirements
Continuous learner with an interest in staying updated with the latest automation technologies and best practices
Problem-solving mindset focused on creating scalable, efficient, and reliable automation solutions
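As a rough illustration of the automated API-validation work this role covers, here is a minimal `unittest`-based sketch; the payload fields and validation rules are hypothetical, not a real Synechron API:

```python
import unittest

def validate_user_payload(payload):
    """Return validation errors for a hypothetical /users API response.

    The fields and rules here are illustrative only.
    """
    errors = []
    for field in ("id", "email"):
        if field not in payload:
            errors.append(f"missing field: {field}")
    if "email" in payload and "@" not in str(payload["email"]):
        errors.append("malformed email")
    return errors

class TestUserPayload(unittest.TestCase):
    def test_valid_payload(self):
        self.assertEqual(validate_user_payload({"id": 1, "email": "a@b.com"}), [])

    def test_missing_and_malformed(self):
        errors = validate_user_payload({"email": "not-an-email"})
        self.assertIn("missing field: id", errors)
        self.assertIn("malformed email", errors)

# Run the suite programmatically (no sys.exit), as a CI step might.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestUserPayload)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a real pipeline the same suite would run under PyTest or `python -m unittest` inside a Jenkins/GitLab CI stage.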
Posted 4 hours ago
6.0 - 11.0 years
18 - 30 Lacs
hyderabad, bengaluru
Work from Office
Role & responsibilities
Primary Skills: AI/ML experience with AWS (SageMaker, Bedrock, Agents, Q) OR Azure (Azure ML, Azure OpenAI, Azure Document Intelligence) OR GCP (Google AI Platform, Vertex AI), Kubernetes, serverless functions, MLOps tools
Required Skills & Qualifications:
• Bachelor's or Master's degree in computer science, Artificial Intelligence, Machine Learning, Data Science, or a closely related quantitative field.
• 3-5 years of hands-on experience in AI/ML engineering, with a demonstrable focus on Generative AI and/or Agentic AI projects.
• Strong practical experience with Generative AI models (e.g., LLMs, Transformers, GANs, Diffusion Models) and their applications.
• Hands-on experience in designing, building, and deploying autonomous AI agents or multi-agent systems, utilizing frameworks like LangChain, AutoGen, CrewAI, or LangGraph.
• Proficiency in Python and strong experience with major ML frameworks such as TensorFlow, PyTorch, or Keras.
• Demonstrated experience with prompt engineering techniques for optimizing LLM behavior.
• Proven ability to deploy and manage AI/ML solutions on at least one major cloud platform:
o AWS: Experience with services like SageMaker, EC2, Lambda, Bedrock, OpenSearch.
o Azure: Experience with Azure Machine Learning, Azure Functions, Azure OpenAI Service.
o GCP: Experience with Google AI Platform (Vertex AI), Cloud Functions.
• Solid understanding of machine learning algorithms, deep learning architectures, natural language processing (NLP), and information retrieval techniques (e.g., RAG).
• Familiarity with containerization technologies (Docker, Kubernetes).
• Experience with MLOps principles and tools for model deployment, monitoring, and lifecycle management.
• Excellent problem-solving, analytical, and critical thinking skills.
• Strong verbal and written communication skills, with the ability to articulate complex technical concepts to diverse audiences.
Mok@teksystems.com
Posted 4 hours ago
5.0 - 7.0 years
5 - 5 Lacs
pune, thiruvananthapuram
Work from Office
Role Proficiency: Acts under the guidance of a Lead II/Architect, understands customer requirements and translates them into designs for new DevOps (CI/CD) components. Capable of managing at least one Agile team.
Outcomes:
Interprets the DevOps tool/feature/component design to develop and support it in accordance with specifications
Adapts existing DevOps solutions and creates own DevOps solutions for new contexts
Codes, debugs, tests, documents, and communicates DevOps development stages/status of DevOps development/support issues
Selects appropriate technical options for development, such as reusing, improving, or reconfiguring existing components
Optimises efficiency, cost and quality of DevOps process, tools and technology development
Validates results with user representatives; integrates and commissions the overall solution
Helps Engineers troubleshoot issues that are novel/complex and are not covered by SOPs
Designs, installs, configures and troubleshoots CI/CD pipelines and software
Able to automate infrastructure provisioning on cloud/on-premises with the guidance of architects
Provides guidance to DevOps Engineers so that they can support existing components
Works with diverse teams using Agile methodologies
Facilitates cost-saving measures through automation
Mentors A1 and A2 resources
Involved in code review for the team
Measures of Outcomes:
Quality of deliverables
Error rate/completion rate at various stages of SDLC/PDLC
# of components reused
# of domain/technology certifications / product certifications obtained
SLA for onboarding and supporting users and tickets
Outputs Expected:
Automated components: Deliver components that automate installation/configuration of software/tools on premises and on cloud; deliver components that automate parts of the build/deploy for applications
Configured components: Configure a CI/CD pipeline that can be used by application development/support teams
Scripts: Develop/support scripts (like PowerShell/Shell/Python scripts) that automate installation/configuration/build/deployment tasks
Onboard users: Onboard and extend existing tools to new app dev/support teams
Mentoring: Mentor and provide guidance to peers
Stakeholder Management: Guide the team in preparing status updates, keeping management updated about the status
Training/SOPs: Create training plans/SOPs to help DevOps Engineers with DevOps activities and with onboarding users
Measure Process Efficiency/Effectiveness: Measure the efficiency/effectiveness of current processes and make changes to make them more efficient and effective
Stakeholder Management: Share the status report with senior stakeholders
Skill Examples:
Experience in the design, installation, configuration and troubleshooting of CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes
Experience in integrating with code quality/test analysis tools like SonarQube/Cobertura/Clover
Experience in integrating build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit
Scripting skills (Python/Linux Shell/Perl/Groovy/PowerShell)
Infrastructure automation skills (Ansible/Puppet/Chef/PowerShell)
Experience in repository management/migration automation - Git/Bitbucket/GitHub/ClearCase
Experience in build automation scripts - Maven/Ant
Experience in artefact repository management - Nexus/Artifactory
Experience in dashboard management & automation - ELK/Splunk
Experience in configuration of cloud infrastructure (AWS/Azure/Google)
Experience in migration of applications from on-premises to cloud infrastructures
Experience working on Azure DevOps/ARM (Azure Resource Manager)/DSC (Desired State Configuration), with strong debugging skills in C# and .NET
Setting up and managing Jira projects and Git/Bitbucket repositories
Skilled in containerization tools like Docker/Kubernetes
Knowledge Examples:
Knowledge of installation/config/build/deploy processes and tools
Knowledge of IaaS - cloud providers (AWS/Azure/Google etc.) and their tool sets
Knowledge of the application development lifecycle
Knowledge of Quality Assurance processes
Knowledge of Quality Automation processes and tools
Knowledge of multiple tool stacks, not just one
Knowledge of build branching/merging
Knowledge about containerization
Knowledge of security policies and tools
Knowledge of Agile methodologies
Additional Comments:
Designation: Lead I - DevOps Engineering
Job Description: As a DevOps Engineer, you are responsible for deploying and monitoring IAM products/tools in GCP. You will manage the automation of all cloud components using Infrastructure-as-Code (IaC) principles within a CI/CD pipeline. You will work closely with IAM Product/Capability leads, the IAM architecture team, IaaS/Cloud Engineering and CI/CD teams to identify and leverage best practices for immutable design & deployment of applications in GCP.
Responsibilities:
Create and manage CI/CD pipelines in Jenkins
Create & maintain Terraform scripts for automated deployment of applications in GCP in a reliable and repeatable manner
Create & maintain Chef cookbooks and Jenkins pipeline scripts for automated deployment of applications
Manage the automation of cloud components using Infrastructure-as-Code (IaC) principles within a CI/CD pipeline
Work with the Application Support team to troubleshoot and resolve issues with automated deployment scripts
Provide deployment, administration and infrastructure operations support for the deployed applications/tools in scope
Deployment of automated bi-weekly and/or monthly software & infrastructure updates as needed
Monitoring the health of the live application environment and responding to alerts
Work collaboratively with architecture teams to implement best practices for infrastructure and application deployments
Required Skills:
Previous experience as an Infrastructure Engineer, Systems Administrator, Systems Engineer or Developer
Infrastructure and application deployment automation experience using Terraform and Jenkins on public clouds (GCP and/or AWS)
Strong sense of team and group collaboration
Experience with Agile methodologies
Experience automating system administration tasks, deployments, and other repeatable tasks
Experience with container technologies (Docker/GKE clusters) is a plus
Experience with Python and other scripting languages is a plus
Experience with Packer is a plus
Experience with GCP and/or AWS IAM policies, roles and bindings is a plus
Notes for WFM or TA: Lead I - DevOps Engineering - 8+ years
Required Skills: Kubernetes, Terraform, DevOps
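For the Terraform-in-Jenkins deployments described above, a pipeline stage typically shells out to the terraform CLI. A hedged Python sketch of such a wrapper (the variable-file name is hypothetical; `-var-file` and `-auto-approve` are standard terraform flags):

```python
import subprocess

def terraform_cmd(action, var_file=None, auto_approve=False):
    """Assemble a terraform CLI invocation for a pipeline stage.

    The stage/file names any real pipeline uses are assumptions.
    """
    cmd = ["terraform", action]
    if var_file:
        cmd.append(f"-var-file={var_file}")
    if auto_approve and action == "apply":
        cmd.append("-auto-approve")  # non-interactive, as CI would run it
    return cmd

def run_stage(action, var_file=None, auto_approve=False):
    # In Jenkins this would execute inside the checked-out workspace.
    return subprocess.run(terraform_cmd(action, var_file, auto_approve), check=True)

print(terraform_cmd("plan", var_file="gcp-prod.tfvars"))
```

Keeping command assembly separate from execution makes the deployment logic itself unit-testable without touching real infrastructure.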
Posted 4 hours ago
5.0 - 10.0 years
16 - 27 Lacs
hyderabad
Work from Office
Role & responsibilities
Job Title: Software Dev Engineer III
Location: Hyderabad
Duration: 6 Months
Job Type: Contract
Work Type: Onsite
Job Description:
Top 3 Responsibilities: Coding, Testing, Deployment; Creating new configuration; Service migrations
Leadership Principles: Deliver Results, Ownership, Dive Deep
Mandatory Requirements: Java, cloud knowledge (preferably AWS)
Preferred skills: AWS knowledge of Databases, EC2 instances, SQS and VPC
Education or Certification: B.Tech or equivalent bachelor's degree
Notice: Immediate joiners only
Shift: 8 hours / IST
Exp: 5+ years
Preferred candidate profile
Posted 4 hours ago
3.0 years
0 Lacs
roorkee, uttarakhand, india
Remote
Company Description Miratech helps visionaries change the world. We are a global IT services and consulting company that brings together enterprise and start-up innovation. Today, we support digital transformation for some of the world's largest enterprises. By partnering with both large and small players, we stay at the leading edge of technology, remain nimble even as a global leader, and create technology that helps our clients further enhance their business. We are a values-driven organization and our culture of Relentless Performance has enabled over 99% of Miratech's engagements to succeed by meeting or exceeding our scope, schedule, and/or budget objectives since our inception in 1989. Miratech has coverage across 5 continents and operates in over 25 countries around the world. Miratech retains nearly 1000 full-time professionals, and our annual growth rate exceeds 25%. Job Description Join us in revolutionizing customer experiences with our client, a global leader in cloud contact center software. Our client brings the power of cloud innovation to enterprises worldwide, enabling businesses to deliver seamless, personalized, and delightful customer interactions. About the Project: This initiative is part of a next-generation digital engagement platform aimed at transforming how businesses connect with customers across multiple channels. The primary focus is the integration of Aqua, an advanced outbound communication solution, into our digital ecosystem. Aqua is widely used by healthcare providers, enterprises, and customer-centric organizations to deliver appointment reminders, test results, marketing campaigns, and personalized notifications—while tracking user engagement in real time. The project is structured into three key phases: SMS channel integration, Email channel integration and WhatsApp channel integration. Responsibilities: Utilize a custom Selenium-based automation framework to perform thorough testing of products. 
Create and implement new test scripts for end-to-end product testing using automation frameworks. Develop automated test cases using Python or Java, depending on the project requirements. Review and interpret results from executed tests, leveraging framework logs, product logs, and traffic dumps to identify and diagnose issues. Maintain and support the existing automation framework to improve coverage, stability, and capabilities. Identify and address weak points in current automation processes, driving continuous improvement. Collaborate closely with Development teams to align test automation activities with company priorities and strategy. Be available for on-call rotation (once every 2–4 weeks) starting from the 3rd–4th month on the project. Qualifications 3+ years' experience in software testing or QA, preferably in SaaS or web applications. Experience with Web UI automation using tools like Selenium. Hands-on experience with Python 3.x and Java. Familiarity with programming concepts and scripting languages. Good knowledge of UNIX/Linux. Experience in back-end testing, API testing, or microservices testing. Practical experience with MySQL. Experience with version control and bug-tracking systems (JIRA, Git, etc.). Troubleshooting and analytical skills, basic log analysis. Good technical English reading and writing skills. Nice to have: Experience with AWS/GCP/Azure automation frameworks for CI/CD processes. Experience in front-end testing of web applications. Familiarity with Java IDEs (Eclipse, IntelliJ, etc.). We offer: Culture of Relentless Performance: join an unstoppable technology development team with a 99% project success rate and more than 30% year-over-year revenue growth. Competitive Pay and Benefits: enjoy a comprehensive compensation and benefits package, including health insurance, language courses, and a relocation program. Work From Anywhere Culture: make the most of the flexibility that comes with remote work.
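One recurring task above is interpreting framework logs to triage failures. A small sketch of that kind of basic log analysis; the log format and test names are invented for illustration:

```python
import re

# Hypothetical framework-log format: "<timestamp> RESULT <test_name> PASS|FAIL"
LOG_LINE = re.compile(r"RESULT\s+(\S+)\s+(PASS|FAIL)")

def failed_tests(log_text):
    """Extract names of failed test cases from framework logs (format is an assumption)."""
    return [name for name, status in LOG_LINE.findall(log_text) if status == "FAIL"]

sample = """\
2024-05-01T10:00:01 RESULT test_sms_delivery PASS
2024-05-01T10:00:05 RESULT test_email_open_tracking FAIL
2024-05-01T10:00:09 RESULT test_whatsapp_optout FAIL
"""
print(failed_tests(sample))  # ['test_email_open_tracking', 'test_whatsapp_optout']
```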
Growth Mindset: reap the benefits of a range of professional development opportunities, including certification programs, mentorship and talent investment programs, internal mobility and internship opportunities. Global Impact: collaborate on impactful projects for top global clients and shape the future of industries. Welcoming Multicultural Environment: be a part of a dynamic, global team and thrive in an inclusive and supportive work environment with open communication and regular team-building company social events. Social Sustainability Values: join our sustainable business practices focused on five pillars, including IT education, community empowerment, fair operating practices, environmental sustainability, and gender equality. Miratech is an equal opportunity employer and does not discriminate against any employee or applicant for employment on the basis of race, color, religion, sex, national origin, age, disability, veteran status, sexual orientation, gender identity, or any other protected status under applicable law.
Posted 4 hours ago
6.0 - 10.0 years
12 - 18 Lacs
hyderabad, pune, bengaluru
Work from Office
Job Profile
As a member of the development group, you will become part of a team that develops and maintains one of Coupa's software products, developed using Ruby and React and built as a multi-tenant SaaS solution on cloud platforms like AWS, Azure & GCP. We expect that you are a strong leader with extensive technical experience. You have a well-founded analytical approach to finding good solutions, a strong sense of responsibility, and excellent skills in communication and planning. You are proactive in your approach and a strong team player.
What you will do:
Implement a cloud-native analytics platform with high performance and scalability
Build an API-first infrastructure for data in and data out
Build data ingestion capabilities for Coupa data, as well as external spend data
Leverage data classification AI algorithms to cleanse and harmonize data
Own data modelling, microservice orchestration, monitoring & alerting
Build solid expertise in the entire Coupa application suite and leverage this knowledge to better design application and data frameworks
Adhere to Coupa's iterative development processes to deliver concrete value each release while driving the longer-term technical vision
Engage with cross-organizational teams such as Product Management, Integrations, Services, Support, and Operations to ensure the success of overall software development, implementation, and deployment
What you will bring to Coupa:
Bachelor's degree in computer science, information systems, computer engineering, systems analysis or a related discipline, or equivalent work experience
4 to 8 years of experience building enterprise SaaS web applications using one or more modern frameworks/technologies: Java/.NET/C, etc.
Exposure to Python & familiarity with AI/ML-based data cleansing, deduplication and entity resolution techniques
Familiarity with an MVC framework such as Django or Rails
Full-stack web development experience, with hands-on experience building responsive UIs, Single Page Applications and reusable components, with a keen eye for UI design and usability
Understanding of microservices and event-driven architecture
Strong knowledge of APIs and integration with the backend
Experience with relational SQL and NoSQL databases such as MySQL / PostgreSQL / AWS Aurora / Cassandra
Proven expertise in performance optimization and monitoring tools
Strong knowledge of cloud platforms (e.g., AWS, Azure, or GCP)
Experience with CI/CD tooling and software delivery and bundling mechanisms
Nice to have: Expertise in Python & familiarity with AI/ML-based data cleansing, deduplication and entity resolution techniques
Nice to have: Experience with Kafka or other pub-sub mechanisms
Nice to have: Experience with Redis or other caching mechanisms
Candidate's Profile
BE/BTech or MCA/BCA with a minimum of 5+ years' experience in Python, Django & cloud platforms like AWS, Azure & GCP
Ready for a 6- to 12-month contract role in Bangalore, Hyderabad or Pune in hybrid mode
Can join within 15 days
Posted 4 hours ago
2.0 years
0 Lacs
coimbatore, tamil nadu, india
On-site
Role: Performance Test Engineer (2+ years' experience)
Work Location: Coimbatore (Onsite, work from office)
To apply: Please share the below to gayathri@steam-a.com and preeti@steam-a.com
1. Phone number
2. Email id
3. Location
4. Total number of years of experience
5. How many years of Performance Test Engineering experience do you have?
6. How many years of experience testing serverless systems do you have?
7. How many years of experience testing traditional server-based systems do you have?
8. How many years of practical experience do you have with tools such as k6, Apache JMeter, Artillery, Gatling, or similar?
9. How many years of experience do you have monitoring performance using tools like CloudWatch, X-Ray, or equivalent?
10. What is your notice period/availability (please specify days/weeks/months)?
11. Max budget for this role is 6 LPA. What is your current CTC?
12. Max budget for this role is 6 LPA. What is your expected CTC?
13. Based on the progress of the interviews we may request you to share your latest pay slips. Please acknowledge that you are OK with this.
14. This is an office-based role, and all staff are expected to work from our Steam-A office in Coimbatore. Please acknowledge that you are OK with this.
15. Anything else you'd like to share that will help us with your application?
16. Updated CV
Job Summary: We are seeking a skilled Performance Test Engineer with hands-on experience in testing both serverless and traditional server-based systems, as well as mobile applications. The ideal candidate will have a strong understanding of performance testing tools, cloud platforms (AWS/Azure/GCP), CI/CD pipelines, and mobile environments. You will be responsible for identifying bottlenecks, simulating load, and ensuring the scalability, reliability, and efficiency of applications under varying loads and network conditions.
Key Responsibilities:
Design, develop, and execute performance, load, and stress tests for applications built on serverless (e.g., AWS Lambda) and server-based (e.g., Node.js, Java) architectures
Plan and conduct mobile performance testing across different devices and network conditions to simulate real-world usage
Collaborate with development, DevOps, and mobile teams to define test scenarios based on real-world workloads, SLAs, and user behaviour patterns
Analyze test results to identify system bottlenecks, CPU/memory utilization issues, and latency problems across both web and mobile platforms
Monitor and benchmark API performance, infrastructure scalability, third-party system integrations, and mobile responsiveness
Use cloud-native tools and third-party solutions (e.g., AWS X-Ray, CloudWatch, k6, JMeter, Gatling, Artillery) to simulate and monitor traffic
Automate performance tests and integrate them into CI/CD pipelines
Generate detailed test reports with actionable insights and optimization recommendations for both web and mobile systems
Continuously refine performance testing strategies for scalability, cost-efficiency, mobile performance, and test coverage
Required Skills & Experience:
2+ years of hands-on experience in performance and load testing
Practical experience with tools such as k6, Apache JMeter, Artillery, Gatling, or similar
Solid understanding of serverless services (AWS Lambda, Step Functions, API Gateway) and server-based systems (e.g., EC2, containerized APIs)
Experience monitoring performance using tools like CloudWatch, X-Ray, or equivalent
Familiarity with distributed tracing tools such as OpenTelemetry, Jaeger, or AWS X-Ray
Proficiency in JavaScript/Node.js, Java, or Python for scripting and automation
Familiarity with CI/CD pipelines (e.g., GitHub Actions, GitLab CI, Jenkins) and experience embedding performance tests into workflows
Experience with mobile performance testing using tools like Charles Proxy, Firebase Performance Monitoring, Xcode Instruments, or Android Profiler
Knowledge of API protocols (REST, WebSockets), authentication mechanisms, and latency-related factors
Experience in cloud environments, preferably AWS
Strong understanding of auto-scaling mechanisms in both serverless and traditional environments
Nice to Have:
Proficiency with IaC tools like Terraform, AWS CloudFormation, or Serverless Framework
Knowledge of event-driven architectures and message queues like Amazon SQS, Kafka, or RabbitMQ
Awareness of security and compliance considerations in performance testing (e.g., rate limiting, HIPAA, GDPR)
Basic understanding of front-end performance testing using Lighthouse, WebPageTest, or Sitespeed.io
Experience with Real User Monitoring (RUM) tools like New Relic Browser, Datadog RUM, or Google Analytics
Mobile performance testing exposure across platforms (iOS/Android) and networks (3G/4G/5G), including battery usage, cold start time, and memory profiling
Test data generation using Mockaroo, Faker.js, or custom scripts
Looking forward to receiving your applications. Thank you.
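The responsibilities above include analyzing test results for latency problems. Load tools such as k6 and JMeter report percentile latencies automatically; purely as an illustration of what those numbers mean, here is a minimal Python sketch of a nearest-rank p50/p95 calculation over hypothetical response times:

```python
import math

def percentile(samples, pct):
    # Nearest-rank percentile: sort the samples, then take the value at
    # rank ceil(pct/100 * n), 1-indexed.
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical response times in milliseconds collected from a load run;
# the two large outliers are what a p95 check is meant to surface.
latencies = [120, 95, 110, 480, 105, 98, 130, 115, 102, 900]
print("p50 =", percentile(latencies, 50), "ms")  # p50 = 110 ms
print("p95 =", percentile(latencies, 95), "ms")  # p95 = 900 ms
```

A CI/CD gate would then fail the build when p95 exceeds an agreed SLA threshold, which is exactly what tools like k6 express as threshold rules.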
Posted 4 hours ago
5.0 - 10.0 years
0 Lacs
chennai, tamil nadu, india
On-site
Job Title: Senior Data Architect
Years of Experience: 5-10 years
Job Description: The Senior Data Architect will design, govern, and optimize the entire data ecosystem for advanced analytics and AI workloads. This role ensures data is collected, stored, processed, and made accessible in a secure, performant, and scalable manner. The candidate will drive architecture design for structured/unstructured data, build data governance frameworks, and support the evolution of modern data platforms across cloud environments.
Key Responsibilities:
Architect enterprise data platforms using Azure/AWS/GCP and modern data lake/data mesh patterns
Design logical and physical data models, semantic layers, and metadata frameworks
Establish data quality, lineage, governance, and security policies
Guide the development of ETL/ELT pipelines using modern tools and streaming frameworks
Integrate AI and analytics solutions with operational data platforms
Enable self-service BI and ML pipelines through Databricks, Synapse, or Snowflake
Lead architecture reviews, design sessions, and CoE reference architecture development
Technical Skills:
Cloud Platforms: Azure Synapse, Databricks, Azure Data Lake, AWS Redshift
Data Modeling: ERwin, dbt, PowerDesigner
Storage & Processing: Delta Lake, Cosmos DB, PostgreSQL, Hadoop, Spark
Integration: Azure Data Factory, Kafka, Event Grid, SSIS
Metadata/Lineage: Purview, Collibra, Informatica
BI Platforms: Power BI, Tableau, Looker
Security & Compliance: RBAC, encryption at rest/in transit, NIST/FISMA
Qualifications:
Bachelor's or Master's in Computer Science, Information Systems, or Data Engineering
Microsoft Certified: Azure Data Engineer / Azure Solutions Architect
Strong experience building cloud-native data architectures
Demonstrated ability to create data blueprints aligned with business strategy and compliance
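Among the responsibilities above is establishing data quality policies. As an illustrative sketch only (real platforms would express these as dbt tests, Great Expectations suites, or Purview rules), here is a toy rule-based check for nulls and duplicate keys; the rows, key column, and rule names are hypothetical:

```python
def run_quality_checks(rows, key="id", required=("id", "email")):
    """Return a mapping of rule name -> offending row indexes: a toy
    version of the null and uniqueness checks data platforms enforce."""
    issues = {"missing_required": [], "duplicate_key": []}
    seen = set()
    for i, row in enumerate(rows):
        # Null/empty check on required columns.
        if any(row.get(col) in (None, "") for col in required):
            issues["missing_required"].append(i)
        # Uniqueness check on the key column.
        k = row.get(key)
        if k in seen:
            issues["duplicate_key"].append(i)
        seen.add(k)
    return issues

rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": ""},               # fails the null check
    {"id": 1, "email": "c@example.com"},  # fails the uniqueness check
]
print(run_quality_checks(rows))
```

In a governed pipeline these results would feed lineage/quality dashboards and block promotion of the offending batch.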
Posted 4 hours ago
4.0 years
0 Lacs
mohali district, india
On-site
Job Summary: We are seeking a skilled and motivated Java Developer with strong expertise in Spring Boot and prior experience in the banking or financial services domain. The ideal candidate will be responsible for designing, developing, and maintaining backend services for enterprise applications. This is a full-time, on-site role based in Mohali, with occasional travel based on project/client needs.
Key Responsibilities:
Develop, test, and maintain backend services and APIs using Java and Spring Boot
Collaborate with front-end developers, QA engineers, and product teams to deliver high-quality solutions
Ensure best practices in code quality, security, and performance
Work closely with clients to understand business requirements and implement them effectively
Troubleshoot and resolve technical issues during development and deployment
Participate in code reviews, daily stand-ups, and project planning meetings
Travel to client locations if required, based on project needs
Required Skills:
4+ years of hands-on experience in Java development
Proficiency in Spring Boot and related technologies (Spring MVC, JPA, etc.)
Strong experience working in the banking or financial domain (mandatory)
Good understanding of RESTful APIs, microservices architecture, and database systems
Familiarity with tools like Git, Maven, Jenkins, JIRA, etc.
Strong problem-solving, debugging, and communication skills
Preferred Qualifications:
Experience working in Agile/Scrum teams
Exposure to cloud platforms (AWS, Azure, or GCP) is a plus
Bachelor's degree in Computer Science, Information Technology, or related field
Posted 4 hours ago
2.0 - 4.0 years
0 Lacs
bengaluru, karnataka, india
On-site
We are Brainlabs, the High-Performance media agency, on a mission to become the world's biggest and best independent media agency. We plan and buy media that delivers profitable and sustained business impact. And what's our formula? Superteams of Brainlabbers, fueled by data and enabled by technology.
Brainlabs has always been a culture-first company. In fact, from the very beginnings of the agency a set of shared principles, philosophies and values was documented in The Brainlabs Handbook, helping us create our unique culture. As with everything here we always seek to adapt and improve, so The Brainlabs Handbook has been fine-tuned to become The Brainlabs Culture Code. This Culture Code consists of 12 codes that speak to what it means to be a Brainlabber. It's a joint commitment to continuous development and creating a company that we can all be proud of, where Brainlabbers can turn up to do great work, make great friends and win together. You can read The Brainlabs Culture Code in full here.
Description: We are looking for a passionate and energetic Analytics Implementation Engineer to join our Data Analytics (Data Platforms and Infrastructure) department. The candidate will use the Tealium iQ tag manager and Google Tag Manager to implement Google Analytics 4 (GA4) and marketing tags for clients, collecting user behavioral data.
The role involves both server-side and client-side implementation, validation and auditing, requirement gathering, and managing solution design reference documentation.
Responsibilities:
Work as a specialist, able to understand clients' analytics requirements, translate them into executable technical projects, create measurement frameworks, and implement the analytics tags
Document Solution Design References (SDRs), technical specification documents, tag collection guides, data layer recommendations, etc.
Implement analytics tags and marketing pixels using Tealium iQ and GTM (Google Tag Manager)
Work with JavaScript, ES6, jQuery functions, HTML, and CSS to capture required data from the webpage
Investigate any discrepancies in GA4 reports, identify the implementation issue, and fix it
Interact with various stakeholders such as clients, employees, and management; able to solely manage clients
Employ best practices in tagging and validate tags
Must be very organized and able to balance working on multiple projects/tasks and small enhancements
Skills & Qualifications:
Overall experience of 2 to 4 years, with hands-on analytics implementation (tagging) experience in Tealium iQ and Google Analytics 4 (GA4)
Server-side tagging experience using Tealium EventStream and consent management experience using any CMP platform
Google Tag Manager experience is an added advantage
Bachelor's or Master's degree in any discipline
Excellent verbal and written communication skills
Excellent JavaScript, ES6, jQuery, HTML, and CSS skills to capture required data from the webpage
Excellent understanding of the GA4 platform and the differences between Universal Analytics (UA) and GA4
Strong tag validation and reports validation skills; confident in thorough pre- and post-publish QA
Knows how to pull data from GA4 exploration reports and how to create segments to analyse data based on client requirements
Hands-on experience with SQL basics, the BigQuery platform, and GCP is an added advantage
Hands-on experience with any other tag management solution (TMS), such as Adobe Launch, is an added advantage
Experience with CDPs like Tealium AudienceStream, BlueConic, etc. is an added advantage
AngularJS, AJAX, and front-end development experience is an added advantage
Knowledge of the digital marketing ecosystem (SEO, Search, Social, Programmatic, Ad Operations, etc.) is an added advantage
Mobile app analytics implementation or A/B test configuration experience is an added advantage
What happens next? We know searching for a job is tough and that you want to find the best career and employer for you. We also want to ensure that this position is the best fit for both you and us. Therefore, you will participate in a comprehensive interview process that includes skills interviews with our team. The goal of this process is to allow you to get to know us as we learn more about you.
Brainlabs actively seeks and encourages applications from candidates with diverse backgrounds and identities. We are proud to be an equal opportunity workplace: we are committed to equal opportunity for all applicants and employees regardless of age, disability, sex, gender reassignment, sexual orientation, pregnancy and maternity, race, religion or belief, and marriage and civil partnerships. If you have a disability or special need that requires accommodation during the application process, please let us know!
Please note that we will never ask you to transfer cash or make any other payment to us in order to apply for a role or to work for Brainlabs. Any such asks are fraudulent and should be reported to the appropriate authorities in your area.
Posted 4 hours ago
5.0 years
0 Lacs
madurai, tamil nadu, india
On-site
We are looking for a talented AI/ML Engineer to design, train, and deploy a custom AI chatbot powered by Large Language Models (LLMs) for our organization.
Key Responsibilities:
Train and fine-tune LLMs for domain-specific chatbot applications
Build conversational workflows using LangChain / RAG pipelines
Integrate the chatbot with websites, CRM, and social platforms
Deploy scalable solutions on cloud platforms (AWS/GCP/Azure)
Work with datasets to improve accuracy and personalization
Required Skills:
Strong in Python, TensorFlow/PyTorch, and Hugging Face Transformers
Knowledge of LLM fine-tuning, prompt engineering, and vector databases
Experience with APIs, Docker, Kubernetes, and FastAPI/Flask
Familiarity with chatbot frameworks and conversational AI
Location: Onsite
Experience Level: 2-5 years / Senior
Apply here: hr@professoracademy.com
If you are passionate about AI, NLP, and chatbot development, we'd love to connect with you!
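The role above involves building RAG pipelines. As a conceptual, dependency-free sketch of just the retrieval step (a real pipeline would use an embedding model and a vector database such as those LangChain wraps, not bag-of-words counts), with hypothetical documents:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words count vector. A real RAG pipeline
    # would call an embedding model and store vectors in a vector DB.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query and return the top k;
    # the retrieved text is then placed in the LLM prompt as context.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday.",
    "Chatbot uptime is monitored around the clock.",
]
print(retrieve("how long do refunds take", docs))
```

Swapping the toy `embed` for a model-backed one and the linear scan for a vector index is the essence of moving this sketch toward a production RAG setup.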
Posted 4 hours ago
5.0 years
0 Lacs
gurugram, haryana, india
On-site
Hiring for Technical Lead
At Procol, a Technical Lead is not just a team leader: they are the driving force behind our engineering excellence, enabling high-performing teams to build scalable, secure, and customer-centric products. This role is key to aligning engineering execution with business goals, shaping team culture, and delivering best-in-class technology outcomes.
We're looking for a hands-on leader who thrives in a fast-paced startup environment, is passionate about building and scaling engineering teams, and has a deep understanding of modern B2B SaaS architectures. If you enjoy mentoring developers, solving technical challenges, and collaborating cross-functionally to ship high-impact features, this role is for you.
What you will do:
Lead High-Performing Teams: Hire, grow, and mentor a team of engineers, fostering a culture of ownership, collaboration, and continuous improvement.
Drive Technical Excellence: Ensure strong engineering practices, from code quality to architecture decisions, while balancing speed and stability.
Own Delivery: Manage timelines, execution, and quality for critical initiatives by working closely with product, design, and business teams.
Scale Systems: Architect and scale backend/frontend systems that power high-volume, low-latency B2B workflows.
Promote Engineering Culture: Champion best practices in software development, testing, and DevOps, driving a culture of engineering excellence.
Set the Bar: Establish and monitor performance metrics for the team, ensuring accountability and consistent progress.
Bridge Strategy and Execution: Translate high-level business goals into actionable technical roadmaps and team objectives.
What you would bring to the table:
5+ years of experience in software development, with at least 2-3 years in a leadership role.
Strong foundation in computer science fundamentals and scalable system design.
Experience leading full-stack development teams in a high-growth SaaS or enterprise tech environment.
Proven ability to manage and ship complex projects in fast-paced, ambiguous environments.
Hands-on expertise with modern backend frameworks (e.g., Ruby on Rails, Node.js, Go, Java) and frontend technologies (e.g., React, JavaScript).
Proficiency in cloud infrastructure (e.g., AWS, GCP), CI/CD, observability, and security best practices.
Excellent interpersonal and communication skills with a knack for cross-functional collaboration.
A people-first mindset with a passion for mentoring and enabling others to succeed.
Posted 4 hours ago