7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Location(s): Tower-11, (IT/ITES) SEZ of M/s Gurugram Infospace Ltd, Vill. Dundahera, Sector-21, Gurugram, Haryana, 122016, IN
Line of Business: Climate COE (ClimCOE)
Job Category: Engineering & Technology
Experience Level: Experienced Hire

At Moody's, we unite the brightest minds to turn today's risks into tomorrow's opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity.

As a Senior Software Engineer, you will design, implement, and maintain scalable and reliable SaaS solutions. Your role will involve collaborating with cross-functional teams and stakeholders to ensure seamless integration and deployment of features, integrations, and bug fixes. You will play a crucial role in enhancing the efficiency and effectiveness of our software development lifecycle, ensuring the highest level of service quality for customers and stakeholders.

Skills And Competencies
- 7+ years of JavaScript/TypeScript programming experience; experience with UI libraries (React, Remix, Vue.js, etc.)
- Sound working knowledge of writing complex reusable UI components, microservices-style architecture, and the creation and consumption of REST APIs
- Strong analytical and problem-solving abilities; capable of working independently and collaboratively
- Hands-on experience with AWS, Azure, or GCP, and familiarity with cloud-native architecture
- Experience contributing throughout the Software Development Life Cycle, including planning, design, development, unit testing, other testing, and debugging

Education
- Bachelor's Degree in Mathematics or Computer Science, or equivalent experience

Responsibilities
- Develop new user-facing features in React using Remix/React Router
- Collaborate with product owners and QA analysts to define requirements and prioritize tasks
- Conduct code reviews to ensure code quality, performance, and scalability
- Collaborate with cross-functional teams to define, design, and deliver new features
- Work closely with team members to ensure successful delivery and implementation of tasks, liaising with management as needed
- Assist and guide less experienced team members, fostering a culture of learning and growth
- Troubleshoot and resolve infrastructure-related issues, collaborating with cross-functional teams for effective solutions
- Manage and support infrastructure and development applications to ensure optimal performance, availability, and security

Moody's is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity, or any other characteristic protected by law. Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody's Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary. For more information on the Securities Trading Program, please refer to the STP Quick Reference guide on ComplianceNet. Please note: STP categories are assigned by the hiring teams and are subject to change over the course of an employee's tenure with Moody's.
Posted 1 day ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Presidio, Where Teamwork and Innovation Shape the Future

At Presidio, we're at the forefront of a global technology revolution, transforming industries through cutting-edge digital solutions and next-generation AI. We empower businesses and their customers to achieve more through innovation, automation, and intelligent insights.

The Role
Presidio is looking for an Architect to design and implement complex systems and software architectures across multiple platforms. The ideal candidate will have extensive experience in systems architecture, software engineering, cloud technologies, and team leadership. You will be responsible for translating business requirements into scalable, maintainable technical solutions and guiding development teams through implementation.

Responsibilities Include
- Design, plan, and manage cloud architectures leveraging AWS, Azure, and GCP, ensuring alignment with business objectives and industry best practices.
- Evaluate and recommend appropriate cloud services and emerging technologies to enhance system performance, scalability, and security.
- Lead the development and integration of software solutions using a variety of programming languages (Java, .NET, Python, Golang, etc.).
- Develop and maintain automated solutions for cloud provisioning, governance, and lifecycle management, utilizing Infrastructure as Code (IaC) tools such as Terraform and Ansible.
- Collaborate with cross-functional teams to gather requirements, translate business needs into technical specifications, and deliver robust cloud-native solutions.
- Guide and mentor development teams, enforcing architectural standards, coding best practices, and technical excellence.
- Provide expert consultation to internal and external stakeholders, offering recommendations on cloud migration, modernization, and optimization strategies.
- Ensure compliance with security, regulatory, and cost management policies across cloud environments.
- Stay current with industry trends, emerging technologies, and best practices, proactively introducing innovations to the organization.

Required Skills And Professional Experience
- 10+ years of experience in software architecture, including significant experience with cloud infrastructure and hyperscaler platforms (AWS, Azure, GCP).
- Deep expertise in at least one hyperscaler (AWS, Azure, or GCP), with working knowledge of the others.
- Strong programming skills in multiple languages (Java, C#, Node.js, JavaScript, .NET, Python, Golang, etc.).
- Experience with services/microservices development and relational databases (Postgres, MySQL, Oracle, etc.).
- Expertise in open-source technologies and NoSQL/RDBMS systems such as Couchbase, Elasticsearch, RabbitMQ, MongoDB, Cassandra, Redis, etc.
- Excellent verbal and written communication skills.
- Knowledge of project management tools and Agile methodologies.
- Certification in AWS or Azure is preferred.

Your Future at Presidio
Joining Presidio means stepping into a culture of trailblazers: thinkers, builders, and collaborators who push the boundaries of what's possible. With our expertise in AI-driven analytics, cloud solutions, cybersecurity, and next-gen infrastructure, we enable businesses to stay ahead in an ever-evolving digital world. Here, your impact is real. Whether you're harnessing the power of Generative AI, architecting resilient digital ecosystems, or driving data-driven transformation, you'll be part of a team that is shaping the future. Ready to innovate? Let's redefine what's next, together.

About Presidio
At Presidio, speed and quality meet technology and innovation. Presidio is a trusted ally for organizations across industries, with a decades-long history of building traditional IT foundations and deep expertise in AI and automation, security, networking, digital transformation, and cloud computing. Presidio fills gaps, removes hurdles, optimizes costs, and reduces risk. Presidio's expert technical team develops custom applications, provides managed services, enables actionable data insights, and builds forward-thinking solutions that drive strategic outcomes for clients globally. For more information, visit www.presidio.com.

Presidio is committed to hiring the most qualified candidates to join our amazing culture. We aim to attract and hire top talent from all backgrounds, including underrepresented and marginalized communities. We encourage women, people of color, people with disabilities, and veterans to apply for open roles at Presidio. Diversity of skills and thought is a key component of our business success.

Recruitment Agencies, Please Note: Presidio does not accept unsolicited agency resumes/CVs. Do not forward resumes/CVs to our careers email address, Presidio employees, or any other channel. Presidio is not responsible for any fees related to unsolicited resumes/CVs.
Posted 1 day ago
4.0 - 7.0 years
0 - 2 Lacs
Hyderabad
Hybrid
About the Role:
We are seeking high-caliber, hands-on Senior Solution Engineers with strong development and operations experience to join our growing support team. These roles are critical in safeguarding the quality and continuity of our support services as we scale rapidly. Working within the UK time zone, you'll collaborate with cross-functional teams to resolve complex issues, optimize cloud-based environments, and support key business applications and integrations. This is a high-impact, hands-on technical role suited for solution-oriented professionals who thrive in dynamic, fast-paced environments.

Key Responsibilities:
- Provide hands-on support and troubleshooting for complex technical issues across AWS, GCP, Azure, Salesforce, and MuleSoft environments.
- Collaborate with development and DevOps teams to maintain system uptime, application stability, and performance.
- Proactively identify, investigate, and resolve support escalations to meet SLAs and improve customer satisfaction.
- Participate in the continuous improvement of support processes, tools, and documentation.
- Contribute to the design, deployment, and optimization of scalable cloud-based solutions.
- Serve as a technical liaison between support, engineering, and client teams.
- Support and maintain CI/CD pipelines, monitoring systems, and infrastructure as code (IaC) where applicable.
- Drive root cause analysis and implement long-term solutions for recurring problems.

Key Requirements:
- Minimum of 4 years of hands-on experience in a technical support or solution engineering role.
- Strong working knowledge of AWS, GCP, Azure, and Salesforce platforms.
- Practical experience with MuleSoft and systems integration.
- Solid understanding of cloud architecture, networking, and security best practices.
- Experience with DevOps tools and methodologies (e.g., CI/CD, Docker, Kubernetes, Terraform).
- Strong scripting or programming skills (e.g., Python, Bash, JavaScript).
- Excellent problem-solving skills and a customer-first mindset.
- Ability to work independently and collaboratively in a distributed team.
- Comfortable working in or aligned to the UK time zone.

Preferred Qualifications:
- Relevant certifications (e.g., AWS Certified Solutions Architect, Salesforce Platform Developer, Azure Solutions Architect, MuleSoft Certified Developer).
- Experience supporting enterprise-grade SaaS products.
- Exposure to ITIL practices or similar support frameworks.
Posted 1 day ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
All roles at JumpCloud are Remote unless otherwise specified in the Job Description.

About JumpCloud
JumpCloud® delivers a unified open directory platform that makes it easy to securely manage identities, devices, and access across your organization. With JumpCloud, IT teams and MSPs enable users to work securely from anywhere and manage their Windows, Apple, Linux, and Android devices from a single platform. JumpCloud is IT Simplified.

Do you enjoy solving challenging problems using the latest technologies within a great team? Is knowing your work will be highly visible and mission critical a key component for the next step in your career? At JumpCloud, we're looking for best-in-class talent to help define the future of modern identity and device management from the ground up.

About the role:
JumpCloud is looking for an experienced Software Engineer to join an engineering team focused on applications and services running on Windows, Mac, or Linux machines and servers, their interaction with the OS/kernel, and the back-end services these applications interact with. Device Management services are key parts of the entire JumpCloud product portfolio. Along with our Identity and Directory services, Device Management provides the foundation for our solutions, both cloud and device based. This team's work will make managing a fleet of devices with JumpCloud easier and more frictionless while providing a very high level of security.

What you'll be doing:
- Primarily working with Go, along with Swift, C#, C++, and Node.js, for cross-platform applications on Windows, macOS, and Linux
- Gaining or applying expertise in areas like Windows services, kernels, Event Loggers, Mac launch daemons, and macOS internals
- Collaborating with architects, UX designers, and DevOps to ensure our systems are highly available, scalable, and deliver exceptional user experiences
- Working within a Scrum framework to drive agile development
- Learning and working with mTLS and related security concepts (prior experience in these areas is a plus)
- Using OAuth/OIDC flows for secure user authentication and service access
- Writing unit tests, functional tests, and acceptance tests, and automating them
- Contributing to the future of our Device Management services by participating in strategic planning and scoping sessions with product managers
- Embodying our core values: building strong connections, thinking big, and striving to improve by 1% every day

We're looking for:
- 5-10 years of experience developing Mac, Windows, or Linux applications (including integration with third-party applications) in programming languages such as Swift, Node.js, C#, C++, and Golang; experience in one of them is a must
- Experience using one of the public cloud providers (AWS, GCP, or Azure) with CI/CD pipelines (preferably GitHub Actions) to build, test, and deploy
- Willingness to mentor junior members of the team
- Bonus points for experience with services, the event logger, or the kernel on Windows, and/or launch daemons and app hosting on Mac

Where you'll be working/Location:
JumpCloud is committed to being Remote First, meaning that you are able to work remotely within the country noted in the Job Description. You must be located in and authorized to work in the country noted in the job description to be considered for this role. Please note: There is an expectation that our engineers participate in on-call shifts. You will be expected to commit to being ready and able to respond during your assigned shift, so that alerts don't go unaddressed.

Language:
JumpCloud has teams in 15+ countries around the world and conducts internal business in English. The interview and any additional screening process will take place primarily in English. To be considered for a role at JumpCloud, you will be required to speak and write English fluently. Any additional language requirements will be included in the details of the job description.

Why JumpCloud?
If you thrive in a fast, SaaS-based environment and are passionate about solving challenging technical problems, we look forward to hearing from you! JumpCloud is an incredible place to share and grow your expertise! You'll work with amazing talent across each department who are passionate about our mission. We're out-of-the-box thinkers, so your unique ideas and approaches for conceiving a product and/or feature will be welcome. You'll have a voice in the organization as you work with a seasoned executive team, a supportive board, and a proven market that our customers are excited about.

One of JumpCloud's three core values is to "Build Connections." To us that means creating "human connection with each other regardless of our backgrounds, orientations, geographies, religions, languages, gender, race, etc. We care deeply about the people that we work with and want to see everyone succeed." - Rajat Bhargava, CEO

Please submit your résumé and a brief explanation about yourself and why you would be a good fit for JumpCloud. Please note JumpCloud is not accepting third-party resumes at this time.

JumpCloud is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Scam Notice: Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of JumpCloud. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that JumpCloud will never ask for any personal account information, such as credit card details or bank account numbers, during the recruitment process. Additionally, JumpCloud will never send you a check for any equipment prior to employment. All communication related to interviews and offers from our recruiters and hiring managers will come from official company email addresses (@jumpcloud.com) and will never ask for any payment, fee, or purchase to be made by the job seeker. If you are contacted by anyone claiming to represent JumpCloud and you are unsure of their authenticity, please do not provide any personal/financial information and contact us immediately at recruiting@jumpcloud.com with the subject line "Scam Notice" #BI-Remote
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
All roles at JumpCloud are Remote unless otherwise specified in the Job Description.

About JumpCloud
JumpCloud® delivers a unified open directory platform that makes it easy to securely manage identities, devices, and access across your organization. With JumpCloud, IT teams and MSPs enable users to work securely from anywhere and manage their Windows, Apple, Linux, and Android devices from a single platform. JumpCloud is IT Simplified.

Do you enjoy solving challenging problems using the latest technologies within a great team? Is knowing your work will be highly visible and mission critical a key component for the next step in your career? At JumpCloud, we're looking for best-in-class talent to help define the future of modern identity and device management from the ground up.

About the role:
JumpCloud is looking for an experienced Software Engineer to join an engineering team focused on applications and services running on Windows, Mac, or Linux machines and servers, their interaction with the OS/kernel, and the back-end services these applications interact with. Device Management services are key parts of the entire JumpCloud product portfolio. Along with our Identity and Directory services, Device Management provides the foundation for our solutions, both cloud and device based. This team's work will make managing a fleet of devices with JumpCloud easier and more frictionless while providing a very high level of security.

What you'll be doing:
- Primarily working with Go, along with Swift, C#, C++, and Node.js, for cross-platform applications on Windows, macOS, and Linux
- Gaining or applying expertise in areas like Windows services, kernels, Event Loggers, Mac launch daemons, and macOS internals
- Collaborating with architects, UX designers, and DevOps to ensure our systems are highly available, scalable, and deliver exceptional user experiences
- Working within a Scrum framework to drive agile development
- Learning and working with mTLS and related security concepts (prior experience in these areas is a plus)
- Using OAuth/OIDC flows for secure user authentication and service access
- Writing unit tests, functional tests, and acceptance tests, and automating them
- Contributing to the future of our Device Management services by participating in strategic planning and scoping sessions with product managers
- Embodying our core values: building strong connections, thinking big, and striving to improve by 1% every day

We're looking for:
- 5-10 years of experience developing Mac, Windows, or Linux applications (including integration with third-party applications) in programming languages such as Swift, Node.js, C#, C++, and Golang; experience in one of them is a must
- Experience using one of the public cloud providers (AWS, GCP, or Azure) with CI/CD pipelines (preferably GitHub Actions) to build, test, and deploy
- Willingness to mentor junior members of the team
- Bonus points for experience with services, the event logger, or the kernel on Windows, and/or launch daemons and app hosting on Mac

Where you'll be working/Location:
JumpCloud is committed to being Remote First, meaning that you are able to work remotely within the country noted in the Job Description. You must be located in and authorized to work in the country noted in the job description to be considered for this role. Please note: There is an expectation that our engineers participate in on-call shifts. You will be expected to commit to being ready and able to respond during your assigned shift, so that alerts don't go unaddressed.

Language:
JumpCloud has teams in 15+ countries around the world and conducts internal business in English. The interview and any additional screening process will take place primarily in English. To be considered for a role at JumpCloud, you will be required to speak and write English fluently. Any additional language requirements will be included in the details of the job description.

Why JumpCloud?
If you thrive in a fast, SaaS-based environment and are passionate about solving challenging technical problems, we look forward to hearing from you! JumpCloud is an incredible place to share and grow your expertise! You'll work with amazing talent across each department who are passionate about our mission. We're out-of-the-box thinkers, so your unique ideas and approaches for conceiving a product and/or feature will be welcome. You'll have a voice in the organization as you work with a seasoned executive team, a supportive board, and a proven market that our customers are excited about.

One of JumpCloud's three core values is to "Build Connections." To us that means creating "human connection with each other regardless of our backgrounds, orientations, geographies, religions, languages, gender, race, etc. We care deeply about the people that we work with and want to see everyone succeed." - Rajat Bhargava, CEO

Please submit your résumé and a brief explanation about yourself and why you would be a good fit for JumpCloud. Please note JumpCloud is not accepting third-party resumes at this time.

JumpCloud is an equal opportunity employer. All applicants will be considered for employment without attention to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Scam Notice: Please be aware that there are individuals and organizations that may attempt to scam job seekers by offering fraudulent employment opportunities in the name of JumpCloud. These scams may involve fake job postings, unsolicited emails, or messages claiming to be from our recruiters or hiring managers. Please note that JumpCloud will never ask for any personal account information, such as credit card details or bank account numbers, during the recruitment process. Additionally, JumpCloud will never send you a check for any equipment prior to employment. All communication related to interviews and offers from our recruiters and hiring managers will come from official company email addresses (@jumpcloud.com) and will never ask for any payment, fee, or purchase to be made by the job seeker. If you are contacted by anyone claiming to represent JumpCloud and you are unsure of their authenticity, please do not provide any personal/financial information and contact us immediately at recruiting@jumpcloud.com with the subject line "Scam Notice" #BI-Remote
Posted 1 day ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Azure SQL team in Azure Data at Microsoft is responsible for the Azure SQL DB, SQL MI, and SQL VM (IaaS) cloud services, SQL Server on-premises, and Arc-enabled SQL Server. Together they power many of the world's mission-critical databases and are deployed by most of the Fortune 1000 companies. A key part of the database experience is the set of client SDKs we provide developers for connecting to the SQL family of databases, all the way from on-premises to the cloud. These SDKs span a variety of languages and frameworks, such as Python, Java, C++, and C#, and related object-relational mapping frameworks.

We are on the lookout for a dedicated Sr. Software Engineer with a strong technical background and a keen focus on execution. The core objective of this role is to contribute to the enhancement of the SQL Server client drivers, aiming to improve their performance, reliability, maintainability, and usability. Additionally, you will be involved in refining their integration with various language-specific data frameworks. This role emphasizes technical prowess and execution, ensuring best practices are adhered to and high-quality code reviews are conducted.

Responsibilities
- Work with a team of engineers to design, implement, and maintain features in one of the above-mentioned SDKs, or develop a new SDK as the need arises
- As a senior engineer, develop excellent design skills and grow as an engineer and technical leader, mentoring and ramping up new hires
- Work closely with program and product managers to ensure we are prioritizing the right customer asks
- Improve and monitor telemetry to assess the health of the product in both pre-release and released software, and use this data to drive quality improvements
- Demonstrate excellent communication and cross-group collaboration skills

As a Senior Software Engineer, you will play a pivotal role within your team, leveraging your expertise to tackle technical challenges and enhance the development process. While your influence may primarily be within your team, there is potential for it to expand outward, depending on your ability to drive projects forward and exhibit leadership qualities. Your contributions will be crucial in the execution phase, focusing on delivering high-quality solutions efficiently. Your impact will be felt through your dedication to execution and your ability to mentor peers in technical matters. This role is an opportunity to make a significant impact on our product quality and the overall customer experience, ensuring the team maintains high standards in coding and development practices. We value diversity and inclusion, striving to create an environment that fosters continuous learning and growth for all team members.

Qualifications
Required/Minimum Qualifications:
- Bachelor's Degree in Computer Science or a related technical discipline AND 8+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, Python, C#, Java, and JavaScript, OR equivalent experience

Preferred/Additional Qualifications:
- Prior development experience with the Rust language is a plus
- Prior experience working with ODBC, JDBC, and other database drivers is a plus
- Prior experience building API libraries for application developers
- Experience building applications/microservices in Azure, AWS, or GCP

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 1 day ago
8.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Solugenix is a leader in IT services, delivering cutting-edge technology solutions, exceptional talent, and managed services to global enterprises. With extensive expertise in highly regulated and complex industries, we are a trusted partner for integrating advanced technologies with streamlined processes. Our solutions drive growth, foster innovation, and ensure compliance—providing clients with reliability and a strong competitive edge. Recognized as a 2024 Top Workplace, Solugenix is proud of its inclusive culture and unwavering commitment to excellence. Our recent expansion, with new offices in the Dominican Republic, Jakarta, and the Philippines, underscores our growing global presence and ability to offer world-class technology solutions. Partnering with Solugenix means more than just business—it means having a dedicated ally focused on your success in today's fast-evolving digital world. Job Title: Lead AWS Cloud Support Engineer Experience: 8-10 Years Location: Hyderabad/Bengaluru (Work from Office – SODC) Work Timings: 24*7 Rotational shifts Leadership & Strategic Responsibilities: Lead and mentor AWS cloud support teams, ensuring efficient issue resolution and best practices adherence. Define, implement, and continuously refine cloud operations strategy to improve system reliability and performance. Act as the primary escalation point for high-impact incidents, ensuring rapid resolution and minimal downtime. Collaborate with senior management to align AWS infrastructure strategies with business goals. Develop training plans and knowledge-sharing sessions to upskill the team on AWS best practices, automation, and security. Establish and maintain strong governance processes to enforce cloud security and compliance policies. Ensure proper documentation of cloud environments, standard operating procedures (SOPs), and best practices. Drive innovation by exploring new AWS services and technologies to enhance cloud operations. 
Technical & Operational Responsibilities: Oversee the design, deployment, and optimization of AWS cloud infrastructure, ensuring high availability and scalability. Implement and enforce AWS security policies, IAM configurations, and network access controls. Lead automation and DevOps initiatives, leveraging tools such as Terraform, Ansible, Kubernetes, and CI/CD pipelines. Manage cloud monitoring, logging, and incident response using AWS CloudWatch, Splunk, and Datadog. Collaborate with development, DevOps, and security teams to ensure AWS best practices are followed. Ensure cost optimization and budget control for cloud resources, identifying opportunities for cost savings. Provide guidance on AWS architecture, infrastructure scaling, and performance tuning. Required Skills & Qualifications: Strong leadership experience, with a proven track record of managing and mentoring cloud support teams. Extensive hands-on experience with AWS cloud operations, including infrastructure management, networking, and security. Expertise in AWS services such as EC2, EKS, S3, RDS, Elastic Load Balancing, Auto Scaling, and AWS Lambda. Deep understanding of AWS networking principles, including VPC, security groups, ACLs, and IAM policies. Experience in incident management, including root cause analysis and post-mortem documentation. Strong knowledge of AWS best practices, governance, and compliance standards. Proficiency in monitoring tools such as AWS CloudWatch, Splunk, and Datadog. Hands-on experience in automation and infrastructure-as-code (IaC) using Terraform, Ansible, and Kubernetes. Strong understanding of CI/CD pipelines, DevOps methodologies, and cloud security principles. Excellent problem-solving, analytical thinking, and decision-making skills. Ability to work effectively in a multi-cultural team environment. Fluent in English, both written and spoken. Bachelor's degree in Computer Science, Information Technology, or related field. 
Preferred Qualifications: AWS Professional or Specialty certifications (e.g., AWS Certified Solutions Architect – Professional, AWS DevOps Engineer – Professional). Experience in managing multi-cloud environments (Azure, GCP) is a plus. Advanced scripting skills in Python, Shell, or PowerShell for automation.
Posted 1 day ago
8.0 years
0 Lacs
India
On-site
About Us SPARK Business Works is a great team with thriving clients, and that results in proud moments. We are a fun, collaborative team that provides custom software development, web development, and digital marketing services for organizations across the country. You’ll find SPARK Ignitors hard at work, full of energy and ready to solve the toughest challenges for our clients. We celebrate creative problem solving and encourage you to invest in yourself and your talents. Our team was recently named to the “Best and Brightest Companies to Work For” and “Inc 5000 Fastest-Growing Private Companies” for our ability to help clients digitize and grow with innovative and practical digital solutions. SPARK is headquartered in Michigan (Kalamazoo & Grand Rapids) with additional offices in Houston, Texas, Bloomington, Illinois, and India. SPARK’s team has served over 400 clients across many industries, including construction, manufacturing, healthcare, and agriculture, to name a few. What You Will Do As a Senior Full Stack Software Developer, you’ll create, maintain, and support the entire functions of projects for our amazing clients. In this role, you’ll be working with a variety of languages, frameworks, and databases, primarily Advanced Javascript, Typescript, PHP, MySQL, HTML/CSS, .NET, and REST API technologies. On any given day you might be architecting new features for our clients, refactoring existing code to be more scalable, and seeing changes through to completion in a live environment. You will work in a collaborative team environment with product managers, other developers, systems analysts, and designers throughout the entire project lifecycle to execute and launch applications for clients. You will be a valued contributor on the software development team. Some travel may occasionally be required to meet with clients and project teams.
Key Responsibilities Embrace and deliver our BOLT Core Values to each other and our clients Be empathetic and inclusive Overdeliver Love what you do Take ownership and follow through Participate in the entire application lifecycle, focusing on architecting the solution, implementation, documentation, delivery, and support. Write clean, maintainable, and testable code to develop functional web applications. Write unit and integration tests. Troubleshoot and debug applications. Ensure optimal application performance & scaling. Leverage cutting-edge technologies to improve legacy applications. Gather and address technical and design requirements, translating them into clear specifications and tasks for yourself and others. Provide training and support to internal teams. Build reusable code and libraries for future use. Act as a technical resource and guide for cross-functional teams, including developers, designers, and product managers, to identify and define new features. Implement the DevSecOps environment and initiatives, and provide feedback on them. Provide technical direction based on emerging technology exploration. Guide less experienced developers, fostering their technical growth and confidence, by occasionally onboarding, mentoring, training, and pair-programming. Occasionally, research and evaluate new integrations, tools, and libraries to enhance the project or development environment. Conduct critical code reviews and provide feedback that encourages best practices and improved implementation strategies. Contribute to documentation efforts for use by the team and/or clients to promote a shared understanding of the codebase, development processes, and application workflow. What You Need 8+ years of relevant experience (preferably a combination of education and professional).
Confidence and experience using high-level or object-oriented programming languages and technologies like PHP, Advanced JavaScript, TypeScript, Node.js, HTML5, and CSS3. Familiarity with DevSecOps or Cloud Platforms such as AWS, Azure, or GCP. Strong work ethic and willingness to “go above and beyond”, bringing a passionate and positive approach to delivering results and encouraging teamwork. Ability to write and detail implementation specs or user stories that support effective development and execution. Excellent written and verbal communication skills, able to convey complex technical information clearly. Experience in business strategy and solutions. Knowledge of software, web development, databases (relational and experience), mobile development, and technology solutions. Ability to communicate with web developers and designers. Large capacity for attention to detail. Ability to meet tight deadlines. Capable of prioritizing multiple projects in order to meet goals without management oversight. Comfortable with Git version control. Ability to optimize database queries. Ability to refactor and optimize existing code. Comfortable using the Linux terminal. Familiar with Docker in a local development environment. Understanding of accessibility and server security standards and compliance. Exceptional debugging skills. Unafraid to take on difficult challenges and dive into the unknown. NICE TO HAVE A solid understanding of or strong desire to learn the following and other programming languages: Python, .NET, C#, Java, etc. Comfortable with writing automated unit and integration tests. Familiar with building and compiling public assets like CSS and JS with tools like SASS and Webpack or Rollup. Previous experience working with multiple platforms (Desktop, Mobile, Tablet, etc.) Experience managing multiple tasks/projects in an agency environment. Ability to lead inside sales with the best interest of the client.
Ability to work with leadership to align on sales and marketing strategies and solutions. Benefits Unlimited PTO (Paid Time Off) Annual Learning & Development Fund Strong Emphasis on Work-Life Balance Flexible Work Arrangements SALARY RANGE ₹12,80,000 - ₹20,40,000 annually SPARK Business Works is an equal opportunity employer and celebrates diversity, equity, and inclusion. Know someone at SPARK? Be sure to submit them as a referral.
Posted 1 day ago
2.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
The Database Engineer will be actively involved in the evaluation, review, and management of databases. You will be part of a team that supports a range of applications and databases. You should be well versed in database administration, which includes installation, performance tuning and troubleshooting. A strong candidate will be able to rapidly troubleshoot complex technical problems under pressure and implement scalable solutions while managing multiple customer groups. What You Will Do Support large-scale enterprise data solutions with a focus on high availability, low latency and scalability. Provide documentation and automation capabilities for Disaster Recovery as part of application deployment. Build infrastructure as code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLI, and programming with cloud SDK). Build CI/CD pipelines for build, test and deployment of application and cloud architecture patterns, using platform (Jenkins) and cloud-native toolchains. Knowledge of the configuration of monitoring solutions and the creation of dashboards (DPA, DataDog, Big Panda, Prometheus, Grafana, Log Analytics, ChaosSearch) What Experience You Need BS degree in Computer Science or related technical field involving coding (e.g., physics or mathematics), or equivalent job experience required 2-5 years of experience in database administration, system administration, performance tuning and automation. 1+ years of experience developing and/or administering software in public cloud Experience in managing traditional databases like SQLServer/Oracle/Postgres/MySQL and providing 24x7 support. Experience in implementing and managing Infrastructure as Code (e.g. Terraform, Python, Chef) and source code repository (GitHub).
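The performance-tuning work described above frequently comes down to checking whether a hot query can use an index. A small illustrative sketch with Python's built-in sqlite3 module (the schema and query are hypothetical; production work would target SQL Server, Oracle, Postgres, or MySQL, where the same check is done with each engine's execution-plan tooling):

```python
import sqlite3

# Illustrative only: show how adding an index changes SQLite's query plan,
# the kind of before/after check a DBA makes when tuning a slow lookup.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql):
    """Return the EXPLAIN QUERY PLAN detail text for a statement."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan: no usable index on customer_id
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # now a search via idx_orders_customer

print(before)
print(after)
```

The same habit applies at larger scale: inspect the plan, add or adjust an index, and confirm the plan actually changed before declaring the query tuned.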
Demonstrable cross-functional knowledge with systems, storage, networking, security and databases Experience in designing and building production data pipelines from data ingestion to consumption within a hybrid big data architecture, using Cloud Native GCP, Java, Python, Scala, SQL etc. Proficiency with continuous integration and continuous delivery tooling and practices Cloud Certification Strongly Preferred What Could Set You Apart An ability to demonstrate successful performance of our Success Profile skills, including: Automation - Uses knowledge of best practices in coding to build pipelines for build, test and deployment of processes/components; Understand technology trends and use knowledge to identify factors that can be used to automate system/process deployments Data / Database Management - Uses knowledge of Database operations and applies engineering skills to improve resilience of products/services. Designs, codes, verifies, tests, documents, modifies programs/scripts and integrated software services; Applies industry best standards and tools to achieve a well-engineered result. Operational Excellence - Prioritizes and organizes own work; Monitors and measures systems against key metrics to ensure availability of systems; Identifies new ways of working to make processes run smoother and faster Technical Communication/Presentation - Explains technical information and the impacts to stakeholders and articulates the case for action; Demonstrates strong written and verbal communication skills Troubleshooting - Applies a methodical approach to routine issue definition and resolution; Monitors actions to investigate and resolve problems in systems, processes and services; Determines problem fixes/remedies. Assists with the implementation of agreed remedies and preventative measures; Analyzes patterns and trends
Posted 1 day ago
2.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology. What You’ll Do Perform general application development activities, including unit testing, code deployment to development environment and technical documentation. Work on one or more projects, making contributions to unfamiliar code written by team members. Diagnose and resolve performance issues. Participate in the estimation process, use case specifications, reviews of test plans and test cases, requirements, and project planning. Document code/processes so that any other developer is able to dive in with minimal effort. Develop, and operate high scale applications from the backend to UI layer, focusing on operational excellence, security and scalability. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit engineering team employing agile software development practices. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network, or service operations and quality. 
Write, debug, and troubleshoot code in mainstream open source technologies Lead effort for Sprint deliverables, and solve problems with medium complexity What Experience You Need Bachelor's degree or equivalent experience 2+ years experience working with software design and Java, Python and Javascript programming languages 2+ years experience with software build management tools like Maven or Gradle 2+ years experience with HTML, CSS and frontend/web development 2+ years experience with software testing, performance, and quality engineering techniques and strategies 2+ years experience with Cloud technology: GCP, AWS, or Azure What could set you apart Self-starter that identifies/responds to priority shifts with minimal supervision UI development (e.g. HTML, JavaScript, Angular and Bootstrap) Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices Source code control management systems (e.g. SVN/Git, Github) and build tools like Maven & Gradle. Agile environments (e.g. Scrum, XP) Relational databases (e.g. SQL Server, MySQL) Atlassian tooling (e.g. JIRA, Confluence, and Github) Developing with modern JDK (v1.7+) Automated Testing: JUnit, Selenium, LoadRunner, SoapUI
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Bring out your best SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best. We are currently seeking a highly talented and motivated quality engineer to work on SAP Products and Database Technology. As a successful candidate for this role, you will have excellent problem-solving and troubleshooting skills, fluency in test methodologies and cloud concepts, solid communication skills and a desire to solve complex problems of scale which are uniquely SAP. You will have the opportunity to contribute your expertise to SAP HANA Cloud and SAP IQ databases. What You’ll Do- Perform an end-to-end quality cycle; understand requirements, feature design, testing and automation strategy of software components across global deployments at scale. Test planning, test automation, test execution, failure analysis of various modules within SAP Database servers. Work within a team of engineers to drive improvements in overall quality, testing processes and practices, and automation. You will work very closely with product teams in different geographic locations to ensure that the test coverage and delivery quality meet the delivery goals. Build and enhance an advanced automated test framework and suites. 
Reproduce and analyze complex database engine problems found in-house and reported by customers and SAP internal stakeholders. Prioritize tasks, develop detailed test plans, and estimate the effort required to complete projects. Analyze performance and stability of the SAP database products. What You Bring- Sound knowledge of Software QA principles and practices. Demonstrated proficiency in test development, test automation and performance testing. Experience with using and administering a DBMS. Experience in one or more of the following programming languages: Java, Python, Shell-Scripting Hands-on experience in Unix (user level and shell scripting) and SQL Experience working in cloud landscapes like AWS, Azure, GCP and tools like Git, Gerrit, Kubernetes, Jenkins, Jira, Docker Experience working in a Continuous Integration and Delivery environment; experience working in cloud technologies. Experience in using any bug tracking and version control systems. Embrace the Agile process; be self-empowered, take ownership of, and be responsible for, your work; collaborate and communicate effectively with team members and other teams. Strong problem solving and analytical ability. Must have excellent verbal and written communication skills. Must have B.Tech or M.Tech degree with 3 - 6 years’ experience. Tech you bring- Experience in one or more of the following programming languages: C, C++, Java, Python, Shell-Scripting Hands-on experience in Unix (user level and shell scripting) and SQL Experience working in cloud landscapes like AWS, Azure, GCP and tools like Git, Gerrit, Kubernetes, Jenkins, Jira, Docker Meet your team- The SAP HANA Database team encompasses global development and product management responsibilities across our portfolio, such as SAP HANA Cloud, Data Intelligence, and SAP Analytics Cloud, and is also a key contributor to the SAP Business Technology Platform.
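The test-development and automation duties above can be illustrated with a minimal stdlib unittest sketch. The function under test is a hypothetical helper, not SAP code; the point is the shape of an automated test that a CI job can run unattended:

```python
import unittest

def normalize_hostname(name: str) -> str:
    """Hypothetical helper under test: trim surrounding whitespace and lowercase."""
    return name.strip().lower()

class NormalizeHostnameTest(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_hostname("  HANA-Node01 "), "hana-node01")

    def test_already_normalized_input_is_unchanged(self):
        self.assertEqual(normalize_hostname("db.example.com"), "db.example.com")

# Run the suite programmatically, the way a CI pipeline stage would.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeHostnameTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A failing assertion surfaces in `result` (and in the runner's exit status under a test runner), which is what lets the automation gate a delivery.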
#SAPBTPEXCareers We win with inclusion SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities.
If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to Recruiting Operations Team: Careers@sap.com For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, et al), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 395908 | Work Area: Software-Quality Assurance | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: .
Posted 1 day ago
2.0 - 6.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Qualification Criteria: B.E. Key Skills: cloud-native services, SQL. Primary Skill: Golang, REST. Functional Area: Software/Web Development. Job Location: Bengaluru, Karnataka, India. Job Type: Contract. Key Responsibilities: Design and develop microservices using Golang with a focus on scalability and performance. Build and maintain RESTful APIs to support mobile/web applications and internal systems. Implement secure, efficient, and reusable code components for distributed cloud-native applications. Work with relational databases and write optimized SQL queries for CRUD operations and reporting. Integrate services with cloud platforms (AWS, Azure, GCP) using native SDKs and tools. Collaborate with DevOps engineers to containerize applications using Docker and orchestrate with Kubernetes. Participate in code reviews, unit testing, and continuous integration/delivery (CI/CD) pipelines. Write clear documentation and contribute to architectural decisions for backend systems
Posted 1 day ago
10.0 - 17.0 years
50 - 75 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Role: Presales Senior Cloud Data Architect (with Data Warehousing Experience) Employment Type: Full-Time Professional Summary: Onix is seeking an experienced Presales Senior Cloud Data Architect with a strong background in data warehousing and cloud platforms to play a pivotal role in the presales lifecycle and solution design process. This position is key to architecting scalable, secure, and cost-efficient data solutions that align with client business objectives. The ideal candidate will have deep expertise in data architecture, modeling, and cloud data platforms such as AWS and GCP, combined with the ability to lead and influence during the presales engagement phase. Scope / Level of Decision Making: This is an exempt position operating under limited supervision, with a high degree of autonomy in presales technical solutioning, client engagement, and proposal development. Complex decisions are escalated to the manager as necessary. Primary Responsibilities: Presales & Solutioning Responsibilities: Engage early in the sales cycle to understand client requirements, gather technical objectives, and identify challenges and opportunities. Partner with sales executives to develop presales strategies, define technical win themes, and align proposed solutions with client needs. Lead the technical discovery process, including stakeholder interviews, requirement elicitation, gap analysis, and risk identification. Design comprehensive cloud data architecture solutions, ensuring alignment with business goals and technical requirements. Develop Proofs of Concept (PoCs), technical demos, and architecture diagrams to validate proposed solutions and build client confidence. Prepare and deliver technical presentations, RFP responses, and detailed proposals for client stakeholders, including C-level executives. Collaborate with internal teams (sales, product, delivery) to scope solutions, define SOWs, and transition engagements to the implementation team.
Drive technical workshops and architecture review sessions with clients to ensure stakeholder alignment. Cloud Data Architecture Responsibilities: Deliver scalable and secure end-to-end cloud data solutions across AWS, GCP, and hybrid environments. Design and implement data warehouse architectures, data lakes, ETL/ELT pipelines, and real-time data streaming solutions. Provide technical leadership and guidance across multiple client engagements and industries. Leverage AI/ML capabilities to support data intelligence, automation, and decision-making frameworks. Apply cost optimization strategies, cloud-native tools, and best practices for performance tuning and governance. Qualifications: Required Skills & Experience: 8+ years of experience in data architecture, data modeling, and data management. Strong expertise in cloud-based data platforms (AWS/GCP), including data warehousing and big data tools. Proficient in SQL, Python, and at least one additional programming language (Java, C++, Scala, etc.). Knowledge of ETL/ELT pipelines, CI/CD, and automated delivery systems. Familiarity with NoSQL and SQL databases (e.g., PostgreSQL, MongoDB). Excellent presentation, communication, and interpersonal skills, especially in client-facing environments. Proven success working with C-level executives and key stakeholders. Experience with data governance, compliance, and security in cloud environments. Strong problem-solving and analytical skills. Ability to manage multiple initiatives and meet tight deadlines in a fast-paced setting. Education: Bachelor's degree in Computer Science, Information Systems, or related field (or equivalent experience required). Travel Expectation: Up to 15% for client engagements and technical workshops.
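The ETL/ELT pipeline design mentioned above always includes a transform step that types, validates, and deduplicates raw records before they land in the warehouse. A minimal, dependency-free sketch of that step; the record shape (order_id, order_date, amount) is an illustrative assumption, not a client schema:

```python
from datetime import date

# Minimal ELT "transform" step over raw string records.

def transform(raw_rows):
    """Cast types, drop rows that fail validation, and dedupe on order_id."""
    seen, out = set(), []
    for row in raw_rows:
        try:
            record = {
                "order_id": int(row["order_id"]),
                "order_date": date.fromisoformat(row["order_date"]),
                "amount": float(row["amount"]),
            }
        except (KeyError, ValueError):
            continue  # malformed row: in a real pipeline, route to quarantine
        if record["order_id"] in seen:
            continue  # duplicate delivery of the same business key
        seen.add(record["order_id"])
        out.append(record)
    return out

raw = [
    {"order_id": "101", "order_date": "2024-03-01", "amount": "19.99"},
    {"order_id": "101", "order_date": "2024-03-01", "amount": "19.99"},  # duplicate
    {"order_id": "oops", "order_date": "2024-03-02", "amount": "5"},     # bad id
]
print(transform(raw))  # one clean, typed record survives
```

At scale the same logic runs inside an engine (Spark, BigQuery, Redshift SQL) rather than a Python loop, but the validate/dedupe contract is identical.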
Posted 1 day ago
9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description: We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modeling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault 2.0. Responsibilities Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF). Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling. Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer). Develop and maintain bronze → silver → gold data layers using DBT or Coalesce. Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery. Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata. Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams). Work closely with QA teams to integrate test automation and ensure data quality. Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases. Document architectures, pipelines, and workflows for internal stakeholders. Requirements Essential Skills: Job Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid). Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python. Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts. Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and Dimensional Modeling. Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF Triggers.
Expertise in monitoring and logging with CloudWatch, AWS Glue Metrics, MS Teams Alerts, and Azure Data Explorer (ADX). Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection. Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates. Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great Expectations. Personal Excellent communication and interpersonal skills, with the ability to engage with teams. Strong problem-solving, decision-making, and conflict-resolution abilities. Proven ability to work independently and lead cross-functional teams. Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism. Ability to maintain confidentiality and handle sensitive information with attention to detail and discretion. The candidate must have a strong work ethic and trustworthiness. Must be highly collaborative and team oriented with commitment to excellence. Preferred Skills Job Proficiency in SQL and at least one programming language (e.g., Python, Scala). Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services. Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica). Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs). Experience with data modeling, data structures, and database design. Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).
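The bronze → silver → gold layering and schema enforcement called out in this posting can be sketched in plain Python. The layer logic and schema below are illustrative assumptions (in practice this runs in DBT/Coalesce or PySpark, as the requirements note), but the contract at each hop is the same:

```python
# Plain-Python sketch of medallion (bronze -> silver -> gold) layering.

SILVER_SCHEMA = {"customer_id": int, "amount": float}  # hypothetical schema

def to_silver(bronze_rows):
    """Schema enforcement: keep only rows castable to the silver schema."""
    silver = []
    for row in bronze_rows:
        try:
            silver.append({col: cast(row[col]) for col, cast in SILVER_SCHEMA.items()})
        except (KeyError, TypeError, ValueError):
            continue  # reject rows with missing or uncastable fields
    return silver

def to_gold(silver_rows):
    """Gold layer: reporting-ready aggregate (total spend per customer)."""
    totals = {}
    for row in silver_rows:
        totals[row["customer_id"]] = totals.get(row["customer_id"], 0.0) + row["amount"]
    return totals

bronze = [
    {"customer_id": "7", "amount": "19.99"},
    {"customer_id": "7", "amount": "5.01"},
    {"amount": "1.00"},  # missing key: dropped at the silver layer
]
print(to_gold(to_silver(bronze)))
```

Bronze keeps raw ingested data, silver holds validated/typed records, and gold holds business aggregates; schema drift shows up as rows rejected at the bronze-to-silver hop.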
Personal Demonstrate proactive thinking Should have strong interpersonal relations, expert business acumen and mentoring skills Have the ability to work under stringent deadlines and demanding client conditions Ability to work under pressure to achieve the multiple daily deadlines for client deliverables with a mature approach Other Relevant Information Bachelor’s in Engineering with specialization in Computer Science or Artificial Intelligence or Information Technology or a related field. 9+ years of experience in data engineering and data architecture. LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
Posted 1 day ago
5.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
About Loti AI, Inc Loti AI specializes in protecting major celebrities, public figures, and corporate IP from online threats, focusing on deepfake and impersonation detection. Founded in 2022, Loti offers likeness protection, content location and removal, and contract enforcement across various online platforms including social media and adult sites. The company's mission is to empower individuals to control their digital identities and privacy effectively. We are seeking a highly skilled and experienced Senior Deep Learning Engineer to join our team. This individual will lead the design, development, and deployment of cutting-edge deep learning models and systems. The ideal candidate is passionate about leveraging state-of-the-art machine learning techniques to solve complex real-world problems, thrives in a collaborative environment, and has a proven track record of delivering impactful AI solutions. Key Responsibilities Model Development and Optimization: Design, train, and deploy advanced deep learning models for various applications such as computer vision, natural language processing, speech recognition, and recommendation systems. Optimize models for performance, scalability, and efficiency on various hardware platforms (e.g., GPUs, TPUs). Research and Innovation: Stay updated with the latest advancements in deep learning, AI, and related technologies. Develop novel architectures and techniques to push the boundaries of what’s possible in AI applications. System Design and Deployment: Architect and implement scalable and reliable machine learning pipelines for training and inference. Collaborate with software and DevOps engineers to deploy models into production environments. Collaboration and Leadership: Work closely with cross-functional teams, including data scientists, product managers, and software engineers, to define project goals and deliverables. Provide mentorship and technical guidance to junior team members and peers.
Data Management: Collaborate with data engineering teams to preprocess, clean, and augment large datasets. Develop tools and processes for efficient data handling and annotation. Performance Evaluation: Define and monitor key performance metrics (KPIs) to evaluate model performance and impact. Conduct rigorous A/B testing and error analysis to continuously improve model outputs. Qualifications And Skills Education: Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or a related field. PhD preferred. Experience: 5+ years of experience in developing and deploying deep learning models. Proven track record of delivering AI-driven products or research with measurable impact. Technical Skills: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX. Strong programming skills in Python, with experience in libraries like NumPy, Pandas, and Scikit-learn. Familiarity with distributed computing frameworks such as Spark or Dask. Hands-on experience with cloud platforms (AWS or GCP) and containerization tools (Docker, Kubernetes). Domain Expertise: Experience with at least one specialized domain, such as computer vision, NLP, or time-series analysis. Familiarity with reinforcement learning, generative models, or other advanced AI techniques is a plus. Soft Skills: Strong problem-solving skills and the ability to work independently. Excellent communication and collaboration abilities. Commitment to fostering a culture of innovation and excellence.
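The model "training and optimization" at the heart of this role is, at its core, iterative gradient descent; frameworks like TensorFlow and PyTorch automate the gradient computation. A dependency-free sketch of the underlying update rule on a one-parameter least-squares fit (the data and learning rate are illustrative):

```python
# Gradient descent on a one-parameter least-squares fit y ~ w * x.
# Frameworks compute the gradient automatically; here it is written out
# by hand so the loop runs with no dependencies.

def fit_slope(xs, ys, lr=0.01, steps=500):
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # dL/dw for L = (1/n) * sum((w*x - y)^2)
        grad = (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad  # step against the gradient
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated with true slope 2
print(round(fit_slope(xs, ys), 3))  # 2.0
```

Deep learning scales this same loop to millions of parameters, with backpropagation supplying the per-parameter gradients and optimizers like Adam adapting the step size.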
Posted 1 day ago
3.0 - 4.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Position: DevOps Engineer
Experience: Minimum 3 to 4 Years
Location: Client site - Belapur, Navi Mumbai
Joining: Immediate

Job Description

As a DevOps Engineer at our organisation, you will play a crucial role in enhancing our development and deployment processes. You will work closely with development, QA, and operations teams to ensure seamless integration and delivery of high-quality software. Your expertise will contribute to the stability, scalability, and performance of our applications and infrastructure.

Key Responsibilities

1. Infrastructure Management: Design, implement, and maintain scalable, secure, and reliable infrastructure on cloud platforms (AWS, Azure, GCP).
2. Continuous Integration/Continuous Deployment (CI/CD): Develop and maintain CI/CD pipelines to automate code deployment, testing, and delivery processes.
3. Monitoring and Logging: Implement monitoring, logging, and alerting solutions to ensure system performance, availability, and reliability.
4. Configuration Management: Use configuration management tools (Ansible, Puppet, Chef) to automate system setup, configuration, and updates.
5. Collaboration: Work closely with development teams to understand their needs and provide support for development, testing, and deployment environments.
6. Security: Implement security best practices and ensure compliance with industry standards.
7. Troubleshooting: Diagnose and resolve issues in development, testing, and production environments.
8. Documentation: Maintain detailed and accurate documentation of configurations, processes, and procedures.

Technical Skills

1. Strong experience with cloud platforms (AWS, Azure, GCP, OCP).
2. Proficiency in scripting languages (Python, Bash, etc.).
3. Experience with CI/CD tools (Jenkins, GitLab CI, CircleCI, etc.).
4. Knowledge of containerization and orchestration (Docker, Kubernetes).
5. Familiarity with configuration management tools (Ansible, Puppet, Chef).
6. Experience with monitoring and logging tools (Prometheus, Grafana, ELK stack).

Soft Skills

1. Strong problem-solving and analytical skills.
2. Excellent communication and collaboration abilities.
3. Ability to work in a fast-paced and dynamic environment.

Preferred Qualifications

Certifications: Relevant certifications (AWS Certified DevOps Engineer, Docker Certified Associate, etc.) are a plus.
Education: Bachelor's degree in Computer Science, Engineering, or any field is preferred.
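The monitoring-and-alerting responsibility above usually rests on one simple primitive: probe a service, retry with exponential backoff, and alert when retries are exhausted. A minimal sketch follows; the function names, delays, and the simulated service are all illustrative, not taken from any specific monitoring stack.

```python
import time

def check_with_backoff(probe, max_attempts=5, base_delay=0.01):
    """Call `probe()` until it returns True, doubling the wait after each
    failure. Returns the attempt number on success, or None if the
    service never recovered (at which point an alert would fire).
    """
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        if probe():
            return attempt          # healthy on this attempt
        time.sleep(delay)
        delay *= 2                  # exponential backoff
    return None                     # still unhealthy: raise an alert

# Simulate a service that becomes healthy on the third check.
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

print(check_with_backoff(fake_probe))  # 3
```

Tools like Prometheus Alertmanager implement a far richer version of this loop (grouping, silencing, routing), but the retry-then-alert core is the same.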
Posted 1 day ago
6.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
JOB_POSTING-3-71493-1

Job Description

Role Title: AVP, Enterprise Logging & Observability (L11)

Company Overview

Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more.

We have recently been ranked #2 among India’s Best Companies to Work for by Great Place to Work. We were among the Top 50 India’s Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.

Organizational Overview

Splunk is Synchrony's enterprise logging solution. Splunk searches and indexes log files and helps derive insights from the data. Its primary goal is to ingest massive datasets from disparate sources and employ advanced analytics to automate operations and improve data analysis. It also offers predictive analytics and unified monitoring for applications, services and infrastructure. Many applications forward data to the Splunk logging solution. The Splunk team, spanning Engineering, Development, Operations, Onboarding, and Monitoring, maintains Splunk and provides solutions to teams across Synchrony.
Role Summary/Purpose

The AVP, Enterprise Logging & Observability is a key leadership role responsible for driving the strategic vision, roadmap, and development of the organization’s centralized logging and observability platform. This role supports multiple enterprise initiatives including applications, security monitoring, compliance reporting, operational insights, and platform health tracking. The role leads platform development using Agile methodology, manages stakeholder priorities, ensures logging standards across applications and infrastructure, and supports security initiatives. This position bridges the gap between technology teams, applications, platforms, cloud, cybersecurity, infrastructure, DevOps, governance, audit, and risk teams and business partners, owning and evolving the logging ecosystem to support real-time insights, compliance monitoring, and operational excellence.

Key Responsibilities

Splunk Development & Platform Management

Lead and coordinate development activities, ingestion pipeline enhancements, onboarding frameworks, and alerting solutions.
Collaborate with engineering, operations, and Splunk admins to ensure scalability, performance, and reliability of the platform.
Establish governance controls for source naming, indexing strategies, retention, access controls, and audit readiness.

Splunk ITSI Implementation & Management

Develop and configure ITSI services, entities, and correlation searches.
Implement notable events aggregation policies and automate response actions.
Fine-tune ITSI performance by optimizing data models, summary indexing, and saved searches.
Help identify patterns and anomalies in logs and metrics.
Develop ML models for anomaly detection, capacity planning, and predictive analytics.
Utilize Splunk MLTK to build and train models for IT operations monitoring.

Security & Compliance Enablement

Partner with InfoSec, Risk, and Compliance to align logging practices with regulations (e.g., PCI-DSS, GDPR, RBI).
Enable visibility for encryption events, access anomalies, secrets management, and audit trails.
Support security control mapping and automation through observability.

Stakeholder Engagement

Act as a strategic advisor and point of contact for business units, application, infrastructure, and security stakeholders and business teams leveraging Splunk.
Conduct stakeholder workshops, backlog grooming, and sprint reviews to ensure alignment.
Maintain clear and timely communications across all levels of the organization.

Process & Governance

Drive logging and observability governance standards, including naming conventions, access controls, and data retention policies.
Lead initiatives for process improvement in log ingestion, normalization, and compliance readiness.
Ensure alignment with enterprise architecture and data classification models.
Lead improvements in logging onboarding lifecycle time, automation pipelines, and self-service ingestion tools.
Mentor junior team members and guide engineering teams on secure, standardized logging practices.

Required Skills/Knowledge

Bachelor's degree with a minimum of 6+ years of experience in Technology, or in lieu of a degree, 8+ years of experience in Technology.
Minimum of 3+ years of experience leading a development team, or an equivalent role in observability, logging, or security platforms.
Splunk Subject Matter Expert (SME).
Strong hands-on understanding of Splunk architecture, pipelines, dashboards, alerting, data ingestion, search optimization, and enterprise-scale operations.
Experience supporting security use cases, encryption visibility, secrets management, and compliance logging.
Splunk Development & Platform Management, Security & Compliance Enablement, Stakeholder Engagement, and Process & Governance.
Experience with Splunk Premium Apps: ITSI and, minimally, Enterprise Security (ES).
Experience with data streaming platforms and tools like Cribl and Splunk Edge Processor.
Proven ability to work in Agile environments using tools such as JIRA or JIRA Align.
Strong communication, leadership, and stakeholder management skills.
Familiarity with security, risk, and compliance standards relevant to BFSI.
Proven experience leading product development teams and managing cross-functional initiatives using Agile methods.
Strong knowledge and hands-on experience with Splunk Enterprise/Splunk Cloud.
Design and implement Splunk ITSI solutions for proactive monitoring and service health tracking.
Develop KPIs, Services, Glass Tables, Entities, Deep Dives, and Notable Events to improve service reliability for users across the firm.
Develop scripts (Python, JavaScript, etc.) as needed in support of data collection or integration.
Develop new applications leveraging Splunk’s analytic and Machine Learning tools to maximize performance, availability and security, improving business insight and operations.
Support senior engineers in analyzing system issues and performing root cause analysis (RCA).

Desired Skills/Knowledge

Deep knowledge of Splunk development, data ingestion, search optimization, alerting, dashboarding, and enterprise-scale operations.
Exposure to SIEM integration, security orchestration, or SOAR platforms.
Knowledge of cloud-native observability (e.g. AWS/GCP/Azure logging).
Experience in BFSI or regulated industries with high-volume data handling.
Familiarity with CI/CD pipelines, DevSecOps integration, and cloud-native logging.
Working knowledge of scripting or automation (e.g., Python, Terraform, Ansible) for observability tooling.
Splunk certifications (Power User, Admin, Architect, or equivalent) will be an advantage.
Awareness of data classification, retention, and masking/anonymization strategies.
Awareness of integration between Splunk and ITSM or incident management tools (e.g., ServiceNow, PagerDuty).
Experience with version control tools: Git, Bitbucket.

Eligibility Criteria

Bachelor's degree with a minimum of 6+ years of experience in Technology, or in lieu of a degree, 8+ years of experience in Technology.
Minimum of 3+ years of experience leading a development team, or an equivalent role in observability, logging, or security platforms.
Demonstrated success in managing large-scale logging platforms in regulated environments.
Excellent communication, leadership, and cross-functional collaboration skills.
Experience with scripting languages such as Python, Bash, or PowerShell for automation and integration purposes.
Prior experience in large-scale, security-driven logging or observability platform development.
Excellent problem-solving skills and the ability to work independently or as part of a team.
Strong communication and interpersonal skills to interact effectively with team members and stakeholders.
Knowledge of IT Service Management (ITSM) and monitoring tools.
Knowledge of other data analytics tools or platforms is a plus.

WORK TIMINGS: 01:00 PM to 10:00 PM IST

This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.
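The anomaly detection in logs and metrics that this role calls for often starts from a simple statistical baseline before reaching for Splunk MLTK models. A minimal sketch, in pure Python; the threshold and the latency data are illustrative, not drawn from any Splunk deployment.

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Flag points more than `threshold` standard deviations from the
    mean. This is the classic first-pass detector for metric spikes;
    MLTK offers comparable (and richer) built-in algorithms.
    """
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []          # a perfectly flat series has no outliers
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Steady request latencies (ms) with one spike at index 8.
latencies = [102, 99, 101, 98, 100, 103, 97, 100, 450, 101]
print(zscore_anomalies(latencies))  # [8]
```

In production this logic would run inside a scheduled search or MLTK model over indexed events rather than an in-memory list, but the idea (compare each point to the distribution of its peers) is identical.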
For Internal Applicants

Understand the criteria or mandatory skills required for the role before applying.
Inform your manager and HRM before applying for any role on Workday.
Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format).
Must not be on any corrective action plan (First Formal/Final Formal, PIP).
L9+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Level / Grade: 11

Job Family Group

Information Technology
Posted 1 day ago
7.0 - 12.0 years
22 - 37 Lacs
Hyderabad
Work from Office
Role & responsibilities

Java Developer - 7+ years' experience in Java, Spring Boot, and microservices. GCP Cloud is a MUST.

Please share your updated profile with binni.sharma@mounttalent.com or WhatsApp it to 8800662549.
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview

Job Title: Senior Engineer, AVP
Location: Pune, India

Role Description

We are seeking a Data Security Engineer to design, implement and manage security measures that protect sensitive data across our organization. This role focuses on the execution and delivery of Data Security solutions, concentrating on configuration, engineering, and integration within a complex enterprise environment. While the role operates within Cybersecurity, the person will collaborate with IT, Risk Management, and Business Units on a case-by-case basis, delivering Data Loss Prevention solutions. The ideal candidate understands and manages the existing tool stack within a complex environment, navigates technical integration challenges, and supports the transition from legacy solutions to new solutions within the pillar and across different areas of the bank. This role will work with specific tools like Symantec DLP and Zscaler but requires the flexibility to evaluate and integrate new solutions like Palo Alto, Fortinet, and Microsoft Purview, and capabilities in existing cloud security solutions like Azure/GCP.

What We’ll Offer You

As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:

Best in class leave policy
Gender neutral parental leaves
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive Hospitalization Insurance for you and your dependents
Accident and Term life Insurance
Complimentary health screening for 35 yrs. and above

Your Key Responsibilities

Policy Development and Implementation: Design and implement data loss prevention policies, standards, and procedures to protect sensitive data from unauthorized access and disclosure.
Risk Assessment: Conduct regular assessments of our implementation to identify vulnerabilities and potential threats to the organization's data.
Develop strategies to mitigate identified risks.
DLP Solutions: Evaluate, deploy, and manage DLP solutions and technologies. Ensure that these tools are effectively integrated and configured to protect sensitive data across the organization.
Monitoring and Analysis: Monitor data movement and usage to detect and respond to potential data breaches or policy violations. Analyse incidents to identify root causes and develop corrective actions.
Collaboration: Work with IT, legal, and business teams to ensure that DLP measures align with organizational goals and regulatory requirements. Provide guidance and support to stakeholders on data protection issues.
Design and implement data security frameworks, including encryption, tokenization and anonymization techniques, within a hybrid environment.
Implement cloud-native security controls (e.g., CASB, CSPM, DSPM) to protect data in SaaS, IaaS, and PaaS environments.
Implement Digital Rights Management, encryption and tokenization strategies and solutions to protect data in hybrid environments and prevent unauthorized access and disclosure.
Deploy and manage data discovery & classification tools to identify sensitive data across structured and unstructured sources.
Implement automated classification and labeling strategies for compliance and risk reduction.

Your Skills And Experience

Technical Expertise

5+ years of hands-on experience in Data Security, Information Protection, or Cloud Security.
Strong expertise in delivering Data Security platforms (Symantec, Netskope, Zscaler, Palo Alto, Fortinet, etc.).
Knowledge of Cloud Service Provisioning and experience with Cloud Security (AWS, Azure, GCP) and SaaS data protection solutions.
Experience with Cloud Access Security Broker (CASB), SaaS Security Posture Management (SSPM), and Data Security Posture Management (DSPM).
Proficiency in network security, endpoint protection, and identity & access management (IAM).
Scripting knowledge (Python, PowerShell, APIs) for security automation is a plus.
Hands-on experience with AI/ML and data-security-related remediations is a plus.

Soft Skills & Collaboration

Strong problem-solving and analytical skills to assess security threats and data exposure risks.
Ability to work cross-functionally with Security, IT, and Risk teams.
Effective written and verbal communication skills, especially when documenting security configurations and investigations.
Professional certifications such as CISSP, CISM, CCSP, GIAC (GCIH, GCFA), or CEH.

How We’ll Support You

Training and development to help you excel in your career
Coaching and support from experts in your team
A culture of continuous learning to aid progression
A range of flexible benefits that you can tailor to suit your needs

About Us And Our Teams

Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
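The content-inspection side of DLP described in this posting typically pairs pattern matching with a validation step to keep false positives down. A minimal sketch for card-number detection follows; it is illustrative only (commercial suites such as Symantec DLP ship far richer detectors, dictionaries, and fingerprinting), and the regex and sample text are invented for the example.

```python
import re

CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits):
    """Luhn checksum: cheap validation that cuts false positives on
    card-like digit runs (order numbers, phone numbers, etc.)."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    """Return card-like numbers in `text` that pass the Luhn check --
    the core of a content-inspection DLP rule."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"\D", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits

sample = "Order ref 1234, card 4111 1111 1111 1111, ext 5678."
print(find_card_numbers(sample))  # ['4111111111111111']
```

A real policy would attach an action (block, quarantine, alert) and context (channel, user, destination) to each hit rather than just returning it.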
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview

Job Title: Corporate Bank Technology – Commercial Banking – Data Engineer
Location: Pune, India

Role Description

Responsible for providing fast and reliable data solutions for warehousing, reporting, and Customer and Business Intelligence, loading data from various systems of record into our platform and making it available for further use. Automate deployment and test processes to deliver fast incremental improvements of our application and platform. Transform and combine data into a data model which supports our data analysts and can be easily consumed by operational databases. Create the best code to fulfill the requirements of our business unit and support our customers with the best possible products. Keep hygiene, Risk and Control, and Stability at the core of every delivery. Work in an agile setup, helping with feedback to improve our way of working.

Commercial Banking Tribe

You’ll be joining the Commercial Bank Tribe, which focuses on the special needs of small and medium enterprise clients in Germany, a designated area for further growth and investment within Corporate Bank. We are responsible for the digital transformation of around 800,000 clients in 3 brands, i.e. the establishment of the BizBanking platform, including development of digital sales and service processes as well as the automation of processes for this client segment. Our tribe is on a journey of extensive digitalisation of business processes and of migrating our applications to the cloud. We are working jointly with our business colleagues in an agile setup, collaborating closely with stakeholders and engineers from other areas, striving to achieve a highly automated and adaptable process and application landscape. Deutsche Bank’s Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world.
Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What We’ll Offer You

As part of our flexible scheme, here are just some of the benefits that you’ll enjoy:

Best in class leave policy
Gender neutral parental leaves
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive Hospitalization Insurance for you and your dependents
Accident and Term life Insurance
Complimentary health screening for 35 yrs. and above

Your Key Responsibilities

Design, develop, and deploy data processing pipelines and data-driven applications on GCP.
Write and maintain SQL queries and use data modeling tools like Dataform or dbt for data management.
Write clean, maintainable code in Java and/or Python, adhering to clean code principles.
Apply concepts of deployments and configurations in GKE/OpenShift, and implement infrastructure as code using Terraform.
Set up and maintain CI/CD pipelines using GitHub Actions; write and maintain unit and integration tests.

Your Skills And Experience

Bachelor's degree in Computer Science, Data Science, or a related field, or equivalent work experience.
Strong experience with Cloud, Terraform, and GitHub Actions.
Proficiency in SQL and Java and/or Python, with experience in tools and frameworks like Apache Beam, Spring Boot and Apache Airflow.
Familiarity with data modeling tools like dbt or Dataform, and experience writing unit and integration tests.
Understanding of clean code principles and commitment to writing maintainable code.
Excellent problem-solving skills, attention to detail, and strong communication skills.

How We’ll Support You

Training and development to help you excel in your career
Coaching and support from experts in your team
A culture of continuous learning to aid progression
A range of flexible benefits that you can tailor to suit your needs

About Us And Our Teams

Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
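The unit- and integration-testable pipelines this role calls for are easiest to achieve when each transform step is a pure function. A minimal sketch follows; the field names, currency, and data-quality rule are invented for illustration, not a real schema.

```python
from datetime import date

def to_warehouse_rows(raw_records):
    """Normalise raw source records into warehouse-ready rows.

    Keeping the transform free of I/O means it can be exercised
    directly in unit tests, with the Beam/Airflow plumbing around it
    tested separately in integration tests.
    """
    rows = []
    for rec in raw_records:
        if not rec.get("customer_id"):        # data-quality gate
            continue
        rows.append({
            "customer_id": str(rec["customer_id"]).strip(),
            "amount_eur": round(float(rec.get("amount", 0)), 2),
            "booking_date": date.fromisoformat(rec["date"]).isoformat(),
        })
    return rows

raw = [
    {"customer_id": " 42 ", "amount": "19.991", "date": "2024-03-01"},
    {"customer_id": None, "amount": "5", "date": "2024-03-01"},  # dropped
]
print(to_warehouse_rows(raw))
```

The same function body can be dropped into a Beam `ParDo` or an Airflow task unchanged, which is the practical payoff of the pure-function style.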
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What You’ll Do

Design, develop, and operate high scale applications across the full engineering stack.
Design, develop, test, deploy, maintain, and improve software.
Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset.
Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
Participate in a tight-knit, globally distributed engineering team.
Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network or service operations and quality.
Research, create, and develop software applications to extend and improve on Equifax Solutions.
Manage sole project priorities, deadlines, and deliverables.
Collaborate on scalability issues involving access to data and information.
Actively participate in Sprint planning, Sprint Retrospectives, and other team activities.

What Experience You Need

Bachelor's degree or equivalent experience
5+ years of software engineering experience
5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS
5+ years experience with Cloud technology: GCP, AWS, or Azure
5+ years experience designing and developing cloud-native solutions
5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes
5+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understanding infrastructure-as-code concepts, Helm Charts, and Terraform constructs

What could set you apart

Self-starter that identifies/responds to priority shifts with minimal supervision.
Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others
UI development (e.g. HTML, JavaScript, Angular and Bootstrap)
Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices
Source code control management systems (e.g. SVN/Git, Github) and build tools like Maven & Gradle
Agile environments (e.g. Scrum, XP)
Relational databases (e.g. SQL Server, MySQL)
Atlassian tooling (e.g. JIRA, Confluence, and Github)
Developing with modern JDK (v1.7+)
Automated Testing: JUnit, Selenium, LoadRunner, SoapUI
Cloud Certification strongly preferred
Posted 1 day ago
10.0 - 20.0 years
35 - 50 Lacs
Thane, Pune, Mumbai (All Areas)
Work from Office
We’re hiring a DevOps Head (10+ yrs exp) for our client in Mumbai/Pune. Hybrid role. Must have AWS/Azure, CI/CD, Terraform, Kubernetes & leadership experience. Share CV + CTC/NP/Location details. Apply now if you’re ready to lead at scale!
Posted 1 day ago
6.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Organization:

At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward and progress: to make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things.

Job Title: Data Scientist
Location: Bangalore
Business & Team: BB Advanced Analytics and Artificial Intelligence COE

Impact & contribution:

As a Senior Data Scientist, you will be instrumental in pioneering Gen AI and multi-agentic systems at scale within CommBank. You will architect, build, and operationalize advanced generative AI solutions, leveraging large language models (LLMs), collaborative agentic frameworks, and state-of-the-art toolchains. You will drive innovation, helping set the organizational strategy for advanced AI, multi-agent collaboration, and responsible next-gen model deployment.

Roles & Responsibilities:

Gen AI Solution Development: Lead end-to-end development, fine-tuning, and evaluation of state-of-the-art LLMs and multi-modal generative models (e.g., transformers, GANs, VAEs, Diffusion Models) tailored for financial domains.
Multi-Agentic System Engineering: Architect, implement, and optimize multi-agent systems, enabling swarms of AI agents (utilizing frameworks like LangChain, LangGraph, and MCP) to dynamically collaborate, chain, reason, critique, and autonomously execute tasks.
LLM-Backed Application Design: Develop robust, scalable GenAI-powered APIs and agent workflows using FastAPI, Semantic Kernel, and orchestration tools. Integrate observability and evaluation using Langfuse for tracing, analytics, and prompt/response feedback loops.
Guardrails & Responsible AI: Employ frameworks like Guardrails AI to enforce robust safety, compliance, and reliability in LLM deployments.
Establish programmatic checks for prompt injections, hallucinations, and output boundaries.
Enterprise-Grade Deployment: Productionize and manage at-scale Gen AI and agent systems with cloud infrastructure (GCP/AWS/Azure), utilizing model optimization (quantization, pruning, knowledge distillation) for latency/throughput trade-offs.
Toolchain Innovation: Leverage and contribute to open source projects in the Gen AI ecosystem (e.g., LangChain, LangGraph, Semantic Kernel, Langfuse, Hugging Face, FastAPI). Continuously experiment with emerging frameworks and research.
Stakeholder Collaboration: Partner with product, engineering, and business teams to define high-impact use cases for Gen AI and agentic automation; communicate actionable technical strategies and drive proof-of-value experiments into production.
Mentorship & Thought Leadership: Guide junior team members in best practices for Gen AI, prompt engineering, agentic orchestration, responsible deployment, and continuous learning. Represent CommBank in the broader AI community through papers, patents, talks, and open-source work.

Essential Skills:

6+ years of hands-on experience in Machine Learning, Deep Learning, or Generative AI domains, including practical expertise with LLMs, multi-agent frameworks, and prompt engineering.
Proficient in building and scaling multi-agent AI systems using LangChain, LangGraph, Semantic Kernel, MCP, or similar agentic orchestration tools.
Advanced experience developing and deploying Gen AI APIs using FastAPI; operational familiarity with Langfuse for LLM evaluation, tracing, and error analytics.
Demonstrated ability to apply guardrails to enforce model safety, explainability, and compliance in production environments.
Experience with transformer architectures (BERT/GPT, etc.), fine-tuning LLMs, and model optimization (distillation/quantization/pruning).
Strong software engineering background (Python), with experience in enterprise-grade codebases and cloud-native AI deployments.
Experience integrating open and commercial LLM APIs and building retrieval-augmented generation (RAG) pipelines.
Exposure to agent-based reinforcement learning, agent simulation, and swarm-based collaborative AI.
Familiarity with robust experimentation using tools like LangSmith, GitHub Copilot, and experiment tracking systems.
Proven track record of driving Gen AI innovation and adoption in cross-functional teams.
Papers, patents, or open-source contributions to the Gen AI/LLM/Agentic AI ecosystem.
Experience with financial services or regulated industries for secure and responsible deployment of AI.

Education Qualifications:

Bachelor’s or Master’s degree in Computer Science, Engineering, or Information Technology.

If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career.

We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696.

Advertising End Date: 01/07/2025
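The multi-agent collaboration pattern this posting describes (draft, critique, revise, stop on acceptance) can be shown without any LLM at all. In a real system each "agent" would wrap an LLM call via LangChain or LangGraph; in this sketch both agents are plain functions so only the control flow remains, and all names and the toy task are invented for illustration.

```python
def run_agents(task, agents, max_rounds=3):
    """Minimal generate-critique loop between a 'writer' and a 'critic'.

    The writer drafts, the critic either accepts (returns None) or
    returns feedback, and the writer revises -- bounded by max_rounds
    so a stubborn critic cannot loop forever.
    """
    draft = agents["writer"](task, feedback=None)
    for _ in range(max_rounds):
        feedback = agents["critic"](draft)
        if feedback is None:          # critic accepts the draft
            return draft
        draft = agents["writer"](task, feedback=feedback)
    return draft                      # best effort after max_rounds

# Toy agents: the critic insists on a risk disclaimer.
def writer(task, feedback):
    text = f"Summary of {task}."
    return text + " Risks apply." if feedback else text

def critic(draft):
    return "add risk note" if "Risks" not in draft else None

print(run_agents("Q3 lending report", {"writer": writer, "critic": critic}))
```

Frameworks like LangGraph generalize this into an explicit state graph (many agents, conditional edges, persisted state), but the accept/revise cycle above is the kernel of the pattern.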
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Engineer
Location: Bangalore
About Us
FICO, originally known as Fair Isaac Corporation, is a leading analytics and decision management company that empowers businesses and individuals around the world with data-driven insights. Known for pioneering the FICO® Score, a standard in consumer credit risk assessment, FICO combines advanced analytics, machine learning, and sophisticated algorithms to drive smarter, faster decisions across industries. From financial services to retail, insurance, and healthcare, FICO's innovative solutions help organizations make precise decisions, reduce risk, and enhance customer experiences. With a strong commitment to the ethical use of AI and data, FICO is dedicated to improving financial access and inclusivity, fostering trust, and driving growth for a digitally evolving world.
The Opportunity
“As a Data Engineer on our newly formed Generative AI team, you will work at the frontier of language-model applications, developing novel solutions across the FICO platform, including fraud investigation, decision automation, process-flow automation, and optimization. You will play a critical role in the implementation of Data Warehousing and Data Lake solutions. You will have the opportunity to make a meaningful impact on FICO’s platform by infusing it with next-generation AI capabilities. You’ll work with a dedicated team, leveraging your data engineering skills to build solutions and drive innovation forward.”
What You’ll Contribute
Perform hands-on analysis, technical design, solution architecture, prototyping, proofs of concept, development, unit and integration testing, debugging, documentation, deployment/migration, updates, maintenance, and support on Data Platform technologies.
Design, develop, and maintain robust, scalable data pipelines for batch and real-time processing using modern tools such as Apache Spark, Kafka, Airflow, or similar.
Build efficient ETL/ELT workflows to ingest, clean, and transform structured and unstructured data from various sources into a well-organized data lake or warehouse.
Manage and optimize cloud-based data infrastructure on platforms such as AWS (e.g., S3, Glue, Redshift, RDS) or Snowflake.
Collaborate with cross-functional teams to understand data needs and deliver reliable datasets that support analytics, reporting, and machine learning use cases.
Implement and monitor data quality, validation, and profiling processes to ensure the accuracy and reliability of downstream data.
Design and enforce data models, schemas, and partitioning strategies that support performance and cost-efficiency.
Develop and maintain data catalogs and documentation, ensuring data assets are discoverable and governed.
Support DevOps/DataOps practices by automating deployments, tests, and monitoring for data pipelines using CI/CD tools.
Proactively identify data-related issues and drive continuous improvements in pipeline reliability and scalability.
Contribute to data security, privacy, and compliance efforts, implementing role-based access controls and encryption best practices.
Design scalable architectures that support FICO’s analytics and decisioning solutions.
Partner with Data Science, Analytics, and DevOps teams to align architecture with business needs.
What We’re Seeking
7+ years of hands-on experience as a Data Engineer working on production-grade systems.
Proficiency in programming languages such as Python or Scala for data processing.
Strong SQL skills, including complex joins, window functions, and query optimization techniques.
Experience with cloud platforms such as AWS, GCP, or Azure, and relevant services (e.g., S3, Glue, BigQuery, Azure Data Lake).
Familiarity with data orchestration tools like Airflow, Dagster, or Prefect.
Hands-on experience with data warehousing technologies like Redshift, Snowflake, BigQuery, or Delta Lake.
Understanding of stream-processing frameworks such as Apache Kafka, Kinesis, or Flink is a plus.
Knowledge of data modeling concepts (e.g., star schema, normalization, denormalization).
Comfortable working in version-controlled environments using Git and managing workflows with GitHub Actions or similar tools.
Strong analytical and problem-solving skills, with the ability to debug and resolve pipeline and performance issues.
Excellent written and verbal communication skills, with an ability to collaborate across engineering, analytics, and business teams.
Demonstrated technical curiosity and passion for learning, with the ability to quickly adapt to new technologies, development platforms, and programming languages as needed.
Bachelor’s degree in Computer Science or a related field.
Exposure to MLOps pipelines (MLflow, Kubeflow, Feature Stores) is a plus but not mandatory.
Engineers with certifications will be preferred.
Our Offer to You
An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others.
The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so.
An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
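The ETL/ELT and data-quality duties above amount to: ingest raw records, validate them, normalize fields, and emit clean rows. A minimal Python sketch of one such cleaning step follows; the column names (`customer_id`, `amount`) and validation rules are hypothetical examples, not FICO's schema.

```python
import csv
import io


def clean_rows(raw_csv: str) -> list[dict]:
    """Ingest raw CSV text, drop incomplete records, and
    normalize fields — one illustrative ETL cleaning step."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    cleaned = []
    for row in reader:
        # Data-quality validation: skip records missing required fields.
        if not row.get("customer_id") or not row.get("amount"):
            continue
        cleaned.append({
            "customer_id": row["customer_id"].strip(),   # trim whitespace
            "amount": round(float(row["amount"]), 2),    # normalize numeric field
        })
    return cleaned
```

In production this logic would typically run inside a Spark job or an Airflow task, with the rejected-record count fed to a data-quality monitor rather than silently dropped.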
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC, and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.
We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer.
Proactive collaboration in the project teams, using your experience to help guide the team through the whole development lifecycle.
Hands-on with planning, estimating, contributing to the architecture, coding, and development. Refactoring and continuous improvement of the code bases is vital.
Focus on software quality and delivering quality throughout the whole process.
Ensuring that technical decisions and information are communicated across all related teams.
Taking responsibility for releases and contributing to the ongoing support of live features.
Experienced in Continuous Delivery practices and how they affect product quality and delivery. We promote a DevOps culture, so you will need to look beyond pure programming and get involved with the deployment and operation of the software we build.
Requirements
To be successful in this role, you should meet the following requirements:
Solid experience in UI engineering with React/Angular, with 3+ years of experience.
Strong hands-on skills in HTML5, CSS, and TypeScript.
Hands-on experience using React to develop web applications and create common components.
Solid hands-on development and troubleshooting skills, with some expertise in Spring Boot APIs and Splunk logs.
Very good with UI and core architectural design patterns.
Solid experience writing unit tests and UI tests; must be familiar with JUnit and integrating those tests with a Jenkins pipeline.
Experience with source code versioning tools, specifically the GitHub command line.
Familiar with security concepts and DevOps.
Familiar with a cloud technology such as AWS or GCP.
You’ll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued by – HSBC Software Development India
The job market for Google Cloud Platform (GCP) professionals in India is growing rapidly as more companies move to cloud-based solutions. GCP offers a wide range of services and tools that help businesses manage their infrastructure, data, and applications in the cloud, which has created high demand for skilled professionals who can work with GCP effectively.
The average salary range for GCP professionals in India varies based on experience and job role. Entry-level positions can expect a salary range of INR 5-8 lakhs per annum, while experienced professionals can earn anywhere from INR 12-25 lakhs per annum.
Typically, a career in GCP progresses from a Junior Developer to a Senior Developer, then to a Tech Lead position. As professionals gain more experience and expertise in GCP, they can move into roles such as Cloud Architect, Cloud Consultant, or Cloud Engineer.
In addition to GCP, professionals in this field are often expected to have skills in:
- Cloud computing concepts
- Programming languages such as Python, Java, or Go
- DevOps tools and practices
- Networking and security concepts
- Data analytics and machine learning
As the demand for GCP professionals continues to rise in India, now is the perfect time to upskill and pursue a career in this field. By mastering GCP and related skills, you can unlock numerous opportunities and build a successful career in cloud computing. Prepare well, showcase your expertise confidently, and land your dream job in the thriving GCP job market in India.