10.0 - 13.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Aeris: For more than three decades, Aeris has been a trusted cellular IoT leader enabling the biggest IoT programs and opportunities across Automotive, Utilities and Energy, Fleet Management and Logistics, Medical Devices, and Manufacturing. Our IoT technology expertise serves a global ecosystem of 7,000 enterprise customers, 30 mobile network operator partners, and 80 million IoT devices across the world. Aeris powers today’s connected smart world with innovative technologies and borderless connectivity that simplify management, enhance security, optimize performance, and drive growth. Built from the ground up for IoT and road-tested at scale, Aeris IoT Services are based on the broadest technology stack in the industry, spanning connectivity up to vertical solutions. As veterans of the industry, we know that implementing an IoT solution can be complex, and we pride ourselves on making it simpler. Our company is in an enviable spot. We’re profitable, and both our bottom line and our global reach are growing rapidly. We’re playing in an exploding market where technology evolves daily and new IoT solutions and platforms are being created at a fast pace.
A few things to know about us:
- We put our customers first. When making decisions, we always seek to do what is right for our customer first, our company second, our teams third, and our individual selves last.
- We do things differently. As a pioneer in a highly competitive industry that is poised to reshape every sector of the global economy, we cannot fall back on old models. Rather, we must chart our own path and strive to out-innovate, out-learn, out-maneuver and out-pace the competition along the way.
- We walk the walk on diversity. We’re a brilliant and eclectic mix of ethnicities, religions, industry experiences, sexual orientations, generations and more – and that’s by design. We see diverse perspectives as a core competitive advantage.
- Integrity is essential. We believe in doing things well – and doing them right. Integrity is a core value here: you’ll see it embodied in our staff, our management approach and our growing social impact work (we have a VP devoted to it). You’ll also see it embodied in the way we manage people and HR issues: we expect employees and managers to deal with issues directly, immediately and with the utmost respect for each other and for the Company.
- We are owners. Strong managers enable and empower their teams to figure out how to solve problems. You will be no exception, and will have the ownership, accountability and autonomy needed to be truly creative.
Position Title: Senior Tech Lead
Experience Required: 10-13 years
Location: Noida
Job Description
Essential Duties & Responsibilities:
- Research, design and development of next-generation applications supporting billions of transactions and large volumes of data.
- Design and development of cloud-based solutions, with extensive hands-on experience in Big Data, distributed programming, ETL workflows and orchestration tools.
- Design and development of microservices in Java/J2EE and Node.js, with experience in containerization using Docker and Kubernetes.
- Focus on developing cloud-native applications utilizing cloud services.
- Work with product managers/owners and internal as well as external customers following Agile methodology.
- Practice rapid iterative product development to mature promising concepts into successful products.
- Execute with a sense of urgency to drive ideas into products through the innovation life-cycle, and demo/evangelize them.
- Should be experienced in using GenAI for faster development.
Skills Required
- Proven experience developing high-performing, scalable cloud applications using various cloud development stacks and services.
- Proven experience with containers and the GCP and AWS cloud platforms.
- Deep skills in Java / Python / Node.js / SQL / PL/SQL.
- Working experience with Spring Boot, ORM, JPA, transaction management, concurrency, and design patterns.
- Good understanding of NoSQL databases like MongoDB.
- Experience with workflow and orchestration tools like NiFi or Airflow would be a big plus.
- Deep understanding of software design and engineering best practices: design principles and patterns, unit testing, and performance engineering.
- Good understanding of distributed architecture, plug-ins and APIs.
- Prior experience with security, cloud, and container security is a great advantage.
- Hands-on experience in building applications on various platforms, with a deep focus on usability, performance and integration with downstream REST web services.
- Exposure to Generative AI models, prompt engineering, or integration with APIs like OpenAI, Cohere, or Google Gemini.
Qualifications/Requirements
- B.Tech/Master's in Computer Science/Engineering or Electrical/Electronic Engineering.
Aeris may conduct background checks to verify the information provided in your application and assess your suitability for the role. The scope and type of checks will comply with the applicable laws and regulations of the country where the position is based. Additional detail will be provided via the formal application process.
Aeris walks the walk on diversity. We’re a brilliant mix of varying ethnicities, religions, cultures, sexual orientations, gender identities, ages and professional/personal/military experiences – and that’s by design. Diverse perspectives are essential to our culture, innovative process and competitive edge. Aeris is proud to be an equal opportunity employer.
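To make the microservices duties above concrete, here is a minimal sketch of a containerizable REST service in Python with FastAPI (Python is among the listed languages; the posting equally covers Java/J2EE and Node.js). The service name, endpoints, and fields are hypothetical assumptions, not Aeris code.

```python
# Illustrative sketch only, not Aeris code: a minimal, containerizable REST
# microservice in Python/FastAPI. Service name, endpoints and fields are
# hypothetical placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="device-usage-service")

class UsageRecord(BaseModel):
    device_id: str
    bytes_used: int

# An in-memory list stands in for a real store such as MongoDB.
_records: list[UsageRecord] = []

@app.get("/healthz")
def healthz() -> dict:
    # Liveness/readiness endpoint a Kubernetes probe could hit.
    return {"status": "ok"}

@app.post("/usage")
def add_usage(record: UsageRecord) -> dict:
    _records.append(record)
    return {"stored": len(_records)}
```

Assuming the file is saved as main.py, it can be run locally with `uvicorn main:app`; a Dockerfile and a Kubernetes Deployment would wrap a service like this for the container platforms the posting names.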
Posted 2 days ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Forma.ai Forma.ai is a Series B startup that's revolutionizing how sales compensation is designed, managed and optimized. We handle billions in annual managed commissions for market leaders like Edmentum, Stryker, and Autodesk. Our growth has been fuelled by our passion for fundamentally changing and shaping how companies use sales intelligence to drive business strategy. We’re welcoming equally driven individuals who are excited about creating something big! About The Team The Customer Operations team is at the heart of Forma.ai's mission. This team has a direct impact on the growth of Forma.ai. They are results-driven and solutions-minded. The Customer Operations team works closely with our customers, helping them to understand and take advantage of all the features Forma.ai offers and ensuring that they get the most value from the platform. What You'll Be Doing Reporting & Dashboarding Design, maintain, and enhance dashboards in BI tools (e.g., Looker Studio, Salesforce/HubSpot reports) to monitor marketing campaign performance, sales pipeline health, lead flow, and conversion metrics Automate recurring reports and implement self-serve analytics capabilities for GTM teams Data Analysis & Insights Analyze funnel performance from top-of-funnel marketing campaigns to bottom-of-funnel sales outcomes Provide regular insights into key KPIs like campaign ROI, customer acquisition cost (CAC), and attribution across channels Support A/B testing initiatives, sales activity analysis, and segmentation strategies Scripting & Automation Write Python and SQL scripts to extract, clean, and structure data from external sources (e.g., job boards, press releases, M&A feeds, web scraping APIs) Build automated enrichment pipelines to augment CRM and marketing data with third-party insights (e.g., firmographics, hiring activity, technology stack, funding events) Data Quality & Tooling Take ownership of data cleanliness and integrity across GTM systems (e.g., Salesforce, HubSpot, Databricks), including investigating issues, proposing solutions, and manually resolving historical data problems when necessary Maintain and improve key GTM logic — including lead scoring models, lifecycle stage transitions, attribution frameworks, and related automation rules — ensuring they are well-defined, consistent, and actionable What We're Looking For Background in Engineering, Commerce, Mathematics and/or Statistics Natural curiosity about AI and emerging technologies — especially where they intersect with automation, data, and workflow orchestration across the GTM stack Familiarity with B2B go-to-market motions and how to measure their effectiveness Experience in report building, data analysis and workflow automation within a GTM tech stack (e.g. Salesforce, HubSpot, Gong, BI tools) Proficiency in SQL or Python and familiarity with at least one BI tool (e.g., Power BI, Looker, Tableau) 3-5 years of related experience High achiever with a strong sense of ownership Ability to take ownership and run tasks in a fast-paced and evolving environment Our Values Work well, together. We’re real. We have kids and pets. Mortgages and student loans. We’re in this together, so no matter how brilliant any one of us is, we always play nice with one another – no exceptions. Be precise. Be relentless. We believe complacency breeds failure, so we set new goals as quickly as we achieve them. We persist in the face of adversity, learn from our mistakes, and push each other to continuously improve. The status-quo is kryptonite. 
Love our tech. Love our customers. Our platform solves a very complex problem in a currently underserved market. While not everyone at Forma is customer-facing, we’re all customer-focused. Maybe even slightly customer-obsessed. Our Commitment To You We know that applying to a new role takes a lot of effort. You're encouraged to apply even if your experience doesn't precisely match the job description. There are many paths to a successful career and we’re looking forward to reading yours. We thank all applicants for their interest.
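As a rough illustration of the "Scripting & Automation" duties described above (pulling and cleaning third-party data for CRM enrichment), here is a minimal Python sketch. The API endpoint, field names, and output file are hypothetical assumptions, not Forma.ai tooling.

```python
# Illustrative sketch only, not Forma.ai tooling: pull records from a
# hypothetical third-party enrichment API, clean them with pandas, and write a
# CSV ready for CRM import.
import pandas as pd
import requests

API_URL = "https://api.example.com/firmographics"  # hypothetical endpoint

def fetch_enrichment(domain: str) -> dict:
    resp = requests.get(API_URL, params={"domain": domain}, timeout=30)
    resp.raise_for_status()
    return resp.json()  # assumed to be a flat JSON object

def build_enrichment_table(domains: list[str]) -> pd.DataFrame:
    rows = [{**fetch_enrichment(d), "domain": d} for d in domains]
    df = pd.DataFrame(rows)
    # Basic cleaning: normalize column names, drop rows missing the join key.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    return df.dropna(subset=["domain"])

if __name__ == "__main__":
    table = build_enrichment_table(["example.com", "example.org"])
    table.to_csv("crm_enrichment.csv", index=False)
```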
Posted 2 days ago
6.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Aeris: For more than three decades, Aeris has been a trusted cellular IoT leader enabling the biggest IoT programs and opportunities across Automotive, Utilities and Energy, Fleet Management and Logistics, Medical Devices, and Manufacturing. Our IoT technology expertise serves a global ecosystem of 7,000 enterprise customers, 30 mobile network operator partners, and 80 million IoT devices across the world. Aeris powers today’s connected smart world with innovative technologies and borderless connectivity that simplify management, enhance security, optimize performance, and drive growth. Built from the ground up for IoT and road-tested at scale, Aeris IoT Services are based on the broadest technology stack in the industry, spanning connectivity up to vertical solutions. As veterans of the industry, we know that implementing an IoT solution can be complex, and we pride ourselves on making it simpler. Our company is in an enviable spot. We’re profitable, and both our bottom line and our global reach are growing rapidly. We’re playing in an exploding market where technology evolves daily and new IoT solutions and platforms are being created at a fast pace.
A few things to know about us:
- We put our customers first. When making decisions, we always seek to do what is right for our customer first, our company second, our teams third, and our individual selves last.
- We do things differently. As a pioneer in a highly competitive industry that is poised to reshape every sector of the global economy, we cannot fall back on old models. Rather, we must chart our own path and strive to out-innovate, out-learn, out-maneuver and out-pace the competition along the way.
- We walk the walk on diversity. We’re a brilliant and eclectic mix of ethnicities, religions, industry experiences, sexual orientations, generations and more – and that’s by design. We see diverse perspectives as a core competitive advantage.
- Integrity is essential. We believe in doing things well – and doing them right. Integrity is a core value here: you’ll see it embodied in our staff, our management approach and our growing social impact work (we have a VP devoted to it). You’ll also see it embodied in the way we manage people and HR issues: we expect employees and managers to deal with issues directly, immediately and with the utmost respect for each other and for the Company.
- We are owners. Strong managers enable and empower their teams to figure out how to solve problems. You will be no exception, and will have the ownership, accountability and autonomy needed to be truly creative.
Position Title: Tech Lead
Experience Required: 6-10 years
Job Description
Essential Duties & Responsibilities:
- Research, design and development of next-generation applications supporting billions of transactions and large volumes of data.
- Design and development of cloud-based solutions, with extensive hands-on experience in Big Data, distributed programming, ETL workflows and orchestration tools.
- Design and development of microservices in Java/J2EE and Node.js, with experience in containerization using Docker and Kubernetes.
- Focus on developing cloud-native applications utilizing cloud services.
- Work with product managers/owners and internal as well as external customers following Agile methodology.
- Practice rapid iterative product development to mature promising concepts into successful products.
- Execute with a sense of urgency to drive ideas into products through the innovation life-cycle, and demo/evangelize them.
- Should be experienced in using GenAI for faster development.
Skills Required
- Proven experience developing high-performing, scalable cloud applications using various cloud development stacks and services.
- Proven experience with containers and the GCP and AWS cloud platforms.
- Deep skills in Java / Python / Node.js / SQL / PL/SQL.
- Working experience with Spring Boot, ORM, JPA, transaction management, concurrency, and design patterns.
- Good understanding of NoSQL databases like MongoDB.
- Experience with workflow and orchestration tools like NiFi or Airflow would be a big plus.
- Deep understanding of software design and engineering best practices: design principles and patterns, unit testing, and performance engineering.
- Good understanding of distributed architecture, plug-ins and APIs.
- Prior experience with security, cloud, and container security is a great advantage.
- Hands-on experience in building applications on various platforms, with a deep focus on usability, performance and integration with downstream REST web services.
- Exposure to Generative AI models, prompt engineering, or integration with APIs like OpenAI, Cohere, or Google Gemini.
Qualifications/Requirements
- B.Tech/Master's in Computer Science/Engineering or Electrical/Electronic Engineering.
Aeris may conduct background checks to verify the information provided in your application and assess your suitability for the role. The scope and type of checks will comply with the applicable laws and regulations of the country where the position is based. Additional detail will be provided via the formal application process.
Aeris walks the walk on diversity. We’re a brilliant mix of varying ethnicities, religions, cultures, sexual orientations, gender identities, ages and professional/personal/military experiences – and that’s by design. Diverse perspectives are essential to our culture, innovative process and competitive edge. Aeris is proud to be an equal opportunity employer.
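For the ETL workflow and orchestration experience this posting calls for (NiFi or Airflow), a minimal Airflow 2.x DAG sketch in Python is shown below; the DAG id, schedule, and task bodies are placeholders, not an Aeris pipeline.

```python
# Illustrative sketch only: a minimal Airflow 2.x DAG with two dependent tasks,
# standing in for the ETL orchestration the posting mentions. IDs, schedule and
# task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pulling raw usage records")  # placeholder for a real source read

def load() -> None:
    print("writing transformed records")  # placeholder for a real sink write

with DAG(
    dag_id="usage_etl_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # the 'schedule' argument requires Airflow 2.4+
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```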
Posted 2 days ago
6.0 - 10.0 years
0 Lacs
Haryana, India
On-site
As a Technical Sales Specialist for Cognitive Network Solutions (CNS) in the Global PreSales Center - Bid Office Organization, you will be responsible for providing technical expertise and domain knowledge during the sales cycle for solutions related to the RAN Intelligent Controller (RIC) and rApps. Your primary role will involve closely collaborating with Sales, Product, and engineering teams to demonstrate technical feasibility and ensure that proposed solutions align with customer requirements. Your key responsibilities will include: - Providing high-quality responses for RFx/tenders within specified timelines. - Developing solution architectures and responding to RFPs/RFIs for rApp-based offerings. - Supporting pre-sales RFX deliverables such as Statement of Compliance (SoC), Dimensioning, HW-SW configuration in ECP/ACS, Solution description, and services estimates. - Aligning solution and proposal strategy and providing support to Customer Units (CUs). - Engaging with customer units during RFX to understand their business and technical needs related to RAN optimization and automation. - Articulating the technical capabilities and business benefits of rApps. - Creating technical documentation, including solution blueprints, architecture diagrams, and integration plans. - Collaborating with cross SA (NM)/BA (BNEW) to tailor rApp solutions based on customer requirements. - Working with product management to influence the rApp product roadmap based on market feedback. To be successful in this role, you should possess: - Good understanding of the sales process. - Knowledge of Service Management and Orchestration (SMO) and Non-RT RIC. - Familiarity with AI/ML frameworks and APIs used in rApp development. - Understanding of containerization (Docker, Kubernetes), CI/CD pipelines, OSS/BSS integration, and RAN analytics. - Deep knowledge of O-RAN specifications, RIC architecture, and the rApp ecosystem. - Awareness of industry trends in 5G, AI/ML for RAN, and O-RAN Alliance developments. Preferred Skills: - Interpersonal skills. - Presentation and communication skills. - Teamwork and collaboration. - Analytical thinking. - Relating and networking. - Delivering results and meeting customer expectations. - Adapting and responding to change. - Ability to meet tight deadlines. - Problem-solving and strategic thinking. - Ability to work independently and across cross-functional teams. - At least 6-10 years of experience in the Telecom industry. - Bachelor's degree in computer science, electronics engineering, or a related field. - Fluency in English for oral and written communication. Preferred Certifications (Optional): - O-RAN Alliance training/certification. - TM Forum or similar telecom standards certifications. - Cloud certifications (AWS/GCP/Azure) are a plus.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Network Security Engineer at Zinnia, you will play a crucial role in managing and optimizing Zscaler Internet Access (ZIA) and Zscaler Private Access (ZPA) across the enterprise. Your responsibilities will include configuring Zscaler policies, supporting Cisco Meraki infrastructure, and architecting AWS networking. You will be expected to lead network security initiatives, implement secure access strategies, and collaborate with various teams to enhance data protection standards. Your primary duties will involve designing and deploying Zscaler solutions, managing policies such as SSL inspection and URL filtering, and serving as the subject matter expert for Zscaler integrations. Additionally, you will configure Cisco Meraki networking hardware, architect AWS networking components, and develop network segmentation strategies using both cloud and on-premises tools. Automation of network management tasks and collaboration with security and infrastructure teams to enforce zero trust architecture will be key aspects of your role. To succeed in this position, you should have at least 5 years of experience in network engineering or security roles, with a minimum of 2 years of hands-on experience with Zscaler platforms. Proficiency in managing Cisco Meraki devices, understanding of AWS networking principles, and knowledge of network security concepts are essential. Scripting experience for automation, excellent communication skills, and the ability to lead projects and mentor junior team members will be beneficial. While certifications such as Zscaler Certified Cloud Professional (ZCCP-IA / ZCCP-PA), Cisco Meraki CMNA, AWS Certified Advanced Networking - Specialty, or CISSP are preferred, they are not mandatory for this role. This position offers you the opportunity to work in a dynamic environment, collaborate with diverse teams, and contribute to enhancing network security and visibility. If you are a hands-on engineer with a strong security instinct, problem-solving mindset, and a desire to work with cutting-edge technologies, we encourage you to apply for the Network Security Engineer position at Zinnia. Join us in our mission to simplify the experience of buying, selling, and administering insurance products, and help more people protect their financial futures.
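To illustrate the kind of scripted network-security automation this role describes, here is a small Python sketch using boto3 to flag AWS security groups open to 0.0.0.0/0; the region and the check itself are simplified assumptions, not Zinnia tooling.

```python
# Illustrative sketch only, not Zinnia tooling: flag AWS security groups that
# allow inbound traffic from anywhere (0.0.0.0/0), in the spirit of the
# "automation of network management tasks" responsibility.
import boto3

def find_open_security_groups(region: str = "us-east-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    flagged = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in sg.get("IpPermissions", []):
            if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
                flagged.append(sg["GroupId"])
                break
    return flagged

if __name__ == "__main__":
    for group_id in find_open_security_groups():
        print(f"open to the internet: {group_id}")
```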
Posted 3 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role : DevOps Engineer Project Role Description : Responsible for building and setting up new development tools and infrastructure utilizing knowledge in continuous integration, delivery, and deployment (CI/CD), Cloud technologies, Container Orchestration and Security. Build and test end-to-end CI/CD pipelines, ensuring that systems are safe against security threats. Must have skills : Kubernetes Good to have skills : Ansible on Microsoft Azure, Terraform, Jenkins Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a DevOps Engineer, you will be responsible for building and setting up new development tools and infrastructure. A typical day involves utilizing your expertise in continuous integration, delivery, and deployment, as well as cloud technologies and container orchestration. You will work on ensuring that systems are secure against potential threats while collaborating with various teams to enhance the development process and streamline operations. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Facilitate knowledge sharing sessions to enhance team capabilities. - Monitor and optimize the performance of CI/CD pipelines. - Must Have Skills: Proficiency in DevOps, EKS, Helm Charts, Ansible, Terraform and Docker. - Experience and skills in setting up infrastructure on AWS cloud with EKS and Helm charts. - Proficient in developing CI/CD pipelines using Jenkins/GitHub or other CI/CD tools. - Ability to debug and fix issues in environment setup and in CI/CD pipelines. - Knowledge and experience automating infra and application setup using Ansible and Terraform. - Good To Have Skills: Experience with continuous integration and continuous deployment tools. - Strong understanding of cloud services and infrastructure management. - Familiarity with containerization technologies such as OpenShift. - Experience in scripting languages for automation and configuration management. Additional Information: - The candidate should have a minimum of 5 years of experience in DevOps. - This position is based in Hyderabad. - A 15 years full time education is required.
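As an example of the CI/CD pipeline monitoring and debugging work listed above, here is a small Python sketch a pipeline stage might run to verify Kubernetes deployment rollout via kubectl; the namespace is a placeholder and the check is intentionally minimal.

```python
# Illustrative sketch only: a minimal post-deploy gate for a CI/CD pipeline that
# shells out to kubectl and fails if any deployment in a namespace is not fully
# rolled out. The namespace name is a placeholder.
import json
import subprocess
import sys

def unready_deployments(namespace: str) -> list[str]:
    out = subprocess.run(
        ["kubectl", "get", "deployments", "-n", namespace, "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    problems = []
    for item in json.loads(out)["items"]:
        spec_replicas = item["spec"].get("replicas", 1)
        ready = item["status"].get("readyReplicas", 0)
        if ready < spec_replicas:
            problems.append(item["metadata"]["name"])
    return problems

if __name__ == "__main__":
    failed = unready_deployments("demo-apps")
    if failed:
        print("not ready:", ", ".join(failed))
        sys.exit(1)  # non-zero exit fails the pipeline stage
```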
Posted 3 days ago
8.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
Remote
About The Company Armada is an edge computing startup that provides computing infrastructure to remote areas where connectivity and cloud infrastructure is limited, as well as areas where data needs to be processed locally for real-time analytics and AI at the edge. We’re looking to bring on the most brilliant minds to help further our mission of bridging the digital divide with advanced technology infrastructure that can be rapidly deployed anywhere . About The Role We are seeking an experienced Product Manager to join our Platform team at Armada.ai. In this role, you'll partner closely with a senior product leader to drive development of core platform capabilities that power our enterprise SaaS and intelligent infrastructure stack. This includes systems like access and user management , alerting/notifications , and broader platform services that are foundational to secure, scalable, and intelligent operations across our global deployments. You’ll be responsible for turning cross-functional needs into platform-level solutions that are secure, intuitive, and extensible across cloud and edge environments. Location. This role is office-based at our Trivandrum, Kerala office. What You’ll Own Platform Product Ownership: Collaborate on the strategy and roadmap for internal platform features including access management, notification infrastructure, and orchestration tools. Enterprise SaaS Enablement: Design capabilities that scale across multi-tenant organizations, internal tools, and customer-facing systems. Cross-Team Collaboration: Work across engineering, architecture, and AI teams to develop cohesive and future-ready product capabilities. User & Stakeholder Discovery: Identify pain points across internal and external users to inform roadmap decisions and prioritize what matters. Data & AI Integration Readiness: Ensure that all platform tools are designed to support intelligence, automation, and observability. 
What You'll Do (Key Responsibilities) Drive the planning and execution of platform features across multiple workstreams Define detailed product requirements, workflows, and acceptance criteria Own backlog grooming, prioritization, and stakeholder alignment Collaborate with designers, engineers, and architects to deliver platform solutions on schedule Support rollout strategies and internal enablement for new platform features Track adoption, gather feedback, and iterate based on insights and usage data Communicate platform vision and progress to leadership and stakeholders Required Qualifications 6–8 years of experience in Product Management, preferably in SaaS, cloud, or enterprise software Experience working on platform or internal tools with technical and cross-functional complexity Familiarity with enterprise application patterns such as access control, notifications, or multi-tenant systems Strong understanding of the product development lifecycle, from discovery to delivery Excellent communication, collaboration, and problem-solving skills Comfortable working with technical teams including engineers, architects, and AI/data leads Strong organizational skills and ability to manage multiple priorities simultaneously Preferred Qualifications Exposure to data platforms, AI/ML product readiness, or automation tools Experience with infrastructure services, observability systems, or alerting platforms Familiarity with concepts like edge computing, identity management, or internal platform APIs Experience in fast-paced, cross-functional product environments Ability to work with global teams and distributed stakeholders Compensation For India-based candidates: We offer a competitive base salary along with equity options, providing an opportunity to share in the success and growth of Armada. You're a Great Fit if You're A go-getter with a growth mindset. You're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge A detail-oriented problem-solver. You can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude Thrive in a fast-paced environment. You're energized by an entrepreneurial spirit, capable of working quickly, and excited to contribute to a growing company A collaborative team player. You focus on business success and are motivated by team accomplishment vs personal agenda Highly organized and results-driven. Strong prioritization skills and a dedicated work ethic are essential for you Equal Opportunity Statement At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and the business needs at the time.
Posted 3 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Project Role : Cloud Platform Engineer Project Role Description : Designs, builds, tests, and deploys cloud application solutions that integrate cloud and non-cloud infrastructure. Can deploy infrastructure and platform environments and create a proof of architecture to test architecture viability, security and performance. Must have skills : Data Modeling Techniques and Methodologies Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As a Cloud Platform Engineer, you will engage in the design, construction, testing, and deployment of cloud application solutions that seamlessly integrate both cloud and non-cloud infrastructures. Your typical day will involve collaborating with cross-functional teams to ensure the architecture's viability, security, and performance, while also creating proofs of concept to validate your designs. You will be responsible for deploying infrastructure and platform environments, ensuring that all components work harmoniously to meet organizational goals and client needs. Your role will require a proactive approach to problem-solving and a commitment to delivering high-quality solutions in a dynamic environment. Roles & Responsibilities: - Expected to perform independently and become an SME. - Active participation/contribution in team discussions is required. - Contribute to providing solutions to work-related problems. - Collaborate with team members to identify and address potential challenges in cloud application deployment. - Develop and maintain documentation related to cloud architecture and deployment processes. Professional & Technical Skills: - Must Have Skills: Proficiency in Data Modeling Techniques and Methodologies. - Good To Have Skills: Experience with cloud service providers such as AWS, Azure, or Google Cloud Platform. - Strong understanding of cloud architecture principles and best practices. - Experience with infrastructure as code tools like Terraform or CloudFormation. - Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes. Additional Information: - The candidate should have a minimum of 3 years of experience in Data Modeling Techniques and Methodologies. - This position is based in Pune. - A 15 years full time education is required.
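To ground the "Data Modeling Techniques and Methodologies" requirement, here is a tiny star-schema fragment sketched with SQLAlchemy in Python; the table and column names are hypothetical and the model is deliberately simplified.

```python
# Illustrative sketch only: a minimal star-schema fragment (one dimension, one
# fact table) expressed with SQLAlchemy. Table and column names are placeholders.
from sqlalchemy import Column, Date, ForeignKey, Integer, Numeric, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class DimCustomer(Base):
    __tablename__ = "dim_customer"
    customer_key = Column(Integer, primary_key=True)  # surrogate key
    customer_id = Column(String(64), nullable=False)  # natural/business key
    segment = Column(String(32))

class FactOrder(Base):
    __tablename__ = "fact_order"
    order_key = Column(Integer, primary_key=True)
    customer_key = Column(Integer, ForeignKey("dim_customer.customer_key"))
    order_date = Column(Date, nullable=False)
    amount = Column(Numeric(12, 2), nullable=False)
```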
Posted 3 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving discussions, contribute to the overall project strategy, and continuously refine your skills to enhance application performance and user experience. Roles & Responsibilities: The Offshore Data Engineer plays a critical role in designing, building, and maintaining scalable data pipelines and infrastructure to support business intelligence, analytics, and machine learning initiatives. Working closely with onshore data architects and analysts, this role ensures high data quality, performance, and reliability across distributed systems. The engineer is expected to demonstrate technical proficiency, proactive problem-solving, and strong collaboration in a remote environment. -Design and develop robust ETL/ELT pipelines to ingest, transform, and load data from diverse sources. -Collaborate with onshore teams to understand business requirements and translate them into scalable data solutions. -Optimize data workflows through automation, parallel processing, and performance tuning. -Maintain and enhance data infrastructure including data lakes, data warehouses, and cloud platforms (AWS, Azure, GCP). -Ensure data integrity and consistency through validation, monitoring, and exception handling. -Contribute to data modeling efforts for both transactional and analytical use cases. -Deliver clean, well-documented datasets for reporting, analytics, and machine learning. -Proactively identify opportunities for cost optimization, governance, and process automation. Professional & Technical Skills: - Programming & Scripting: Proficiency in Databricks with SQL and Python for data manipulation and pipeline development. - Big Data Technologies: Experience with Spark, Hadoop, or similar distributed processing frameworks. -Workflow Orchestration: Hands-on experience with Airflow or equivalent scheduling tools. -Cloud Platforms: Strong working knowledge of cloud-native services (AWS Glue, Azure Data Factory, GCP Dataflow). -Data Modeling: Ability to design normalized and denormalized schemas for various use cases. -ETL/ELT Development: Proven experience in building scalable and maintainable data pipelines. -Monitoring & Validation: Familiarity with data quality frameworks and exception handling mechanisms. Good To have Skills -DevOps & CI/CD: Exposure to containerization (Docker), version control (Git), and deployment pipelines. -Data Governance: Understanding of metadata management, lineage tracking, and compliance standards. -Visualization Tools: Basic knowledge of BI tools like Power BI, Tableau, or Looker. -Machine Learning Support: Experience preparing datasets for ML models and feature engineering. Additional Information: - The candidate should have minimum 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Chennai office. 
- A 15 years full time education is required.
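As a concrete, simplified example of the Databricks/Spark ETL work described above, the following PySpark sketch reads a raw landing zone, cleans it, and writes a curated daily aggregate; the paths and column names are assumptions.

```python
# Illustrative sketch only: a minimal PySpark batch transform of the ETL/ELT
# kind described in the posting, runnable on Databricks or any Spark cluster.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")  # hypothetical landing path

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

daily = cleaned.groupBy("order_date").agg(
    F.count("*").alias("orders"),
    F.sum("amount").alias("revenue"),
)

# Overwrite the curated layer, partitioned by day for downstream reporting.
daily.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/orders_daily/")
```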
Posted 3 days ago
7.5 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Project Role : Application Tech Support Practitioner Project Role Description : Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge. Must have skills : Python (Programming Language) Good to have skills : Generative AI Minimum 7.5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: We are seeking a highly motivated and technically skilled GenAI & Prompt Engineering Specialist to join our Automation & Asset Development & Deployment team. This role will focus on designing, developing, and optimizing generative AI solutions using Python and large language models (LLMs). You will be instrumental in building intelligent automation workflows, refining prompt strategies, and ensuring scalable, secure AI deployments. Roles & Responsibilities: • Design, test, and optimise prompts for LLMs to support use cases that benefit infra & application managed services. • Build and maintain Python-based microservices and scripts for data processing, API integration, and model orchestration. • Collaborate with SMEs to convert business requirements into GenAI-powered workflows, including chunking logic, token optimisation, and schema transformation. • Work with foundation models and APIs (e.g., OpenAI, Vertex AI, Claude Sonnet) to embed GenAI capabilities into enterprise platforms. • Ensure all AI solutions comply with internal data privacy, PII masking, and security standards. • Conduct A/B testing of prompts, evaluate model outputs, and iterate based on SME feedback. • Maintain clear documentation of prompt strategies, model behaviors, and solution architectures. Professional & Technical Skills: • Strong proficiency in Python, including experience with REST APIs, data parsing, and automation scripting. • Deep understanding of LLMs, prompt engineering, and GenAI frameworks (e.g., LangChain, RAG pipelines). • Familiarity with data modelling, SQL, and RDBMS concepts. • Experience with agentic workflows, token optimization, and schema chunking. Additional Information: - The candidate should have a minimum of 7.5 years of experience in Python (Programming Language). - This position is based at our Noida office. - A 15 years full time education is required.
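To illustrate the prompt-engineering and Python scripting duties above, here is a minimal sketch of a templated call to the OpenAI Chat Completions API (openai>=1.0 client); the model name, system prompt, and use case are placeholders rather than the team's actual assets.

```python
# Illustrative sketch only: a minimal prompt-templating call against the OpenAI
# Chat Completions API (openai>=1.0). Model name, prompt and use case are
# placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an assistant that summarizes infrastructure incident tickets. "
    "Answer in three bullet points and never include personal data."
)

def summarize_ticket(ticket_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Ticket:\n{ticket_text}"},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_ticket("Disk usage on node-7 exceeded 95% at 02:14 UTC ..."))
```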
Posted 3 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Project Role : Application Tech Support Practitioner Project Role Description : Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge. Must have skills : Python (Programming Language) Good to have skills : Generative AI Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: We are seeking a highly motivated and technically skilled GenAI & Prompt Engineering Specialist to join our Automation & Asset Development & Deployment team. This role will focus on designing, developing, and optimizing generative AI solutions using Python and large language models (LLMs). You will be instrumental in building intelligent automation workflows, refining prompt strategies, and ensuring scalable, secure AI deployments. Roles & Responsibilities: • Design, test, and optimise prompts for LLMs to support use cases that benefit infra & application managed services. • Build and maintain Python-based microservices and scripts for data processing, API integration, and model orchestration. • Collaborate with SMEs to convert business requirements into GenAI-powered workflows, including chunking logic, token optimisation, and schema transformation. • Work with foundation models and APIs (e.g., OpenAI, Vertex AI, Claude Sonnet) to embed GenAI capabilities into enterprise platforms. • Ensure all AI solutions comply with internal data privacy, PII masking, and security standards. • Conduct A/B testing of prompts, evaluate model outputs, and iterate based on SME feedback. • Maintain clear documentation of prompt strategies, model behaviors, and solution architectures. Professional & Technical Skills: • Strong proficiency in Python, including experience with REST APIs, data parsing, and automation scripting. • Deep understanding of LLMs, prompt engineering, and GenAI frameworks (e.g., LangChain, RAG pipelines). • Familiarity with data modelling, SQL, and RDBMS concepts. • Experience with agentic workflows, token optimization, and schema chunking. Additional Information: - The candidate should have a minimum of 5 years of experience in Python (Programming Language). - This position is based at our Noida office. - A 15 years full time education is required.
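Complementing the sketch under the 7.5-year listing above, here is a minimal Python harness for the A/B prompt testing duty in this posting; the scoring heuristic and the `generate` callable are stand-ins for a real evaluation setup.

```python
# Illustrative sketch only: a tiny A/B prompt evaluation harness. `generate`
# stands in for any LLM call (for example, the summarize_ticket sketch above);
# the scoring rule is a placeholder heuristic.
from typing import Callable

def score(output: str) -> float:
    # Placeholder heuristic: reward outputs that use bullets and stay short.
    return (1.0 if "-" in output else 0.0) + max(0.0, 1.0 - len(output) / 2000)

def compare_prompts(
    prompt_a: str,
    prompt_b: str,
    cases: list[str],
    generate: Callable[[str, str], str],
) -> dict:
    totals = {"A": 0.0, "B": 0.0}
    for case in cases:
        totals["A"] += score(generate(prompt_a, case))
        totals["B"] += score(generate(prompt_b, case))
    n = max(len(cases), 1)
    return {label: total / n for label, total in totals.items()}
```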
Posted 3 days ago
5.0 years
20 - 45 Lacs
Chennai, Tamil Nadu, India
On-site
Company: Element34 Business Type: Enterprise Company Type: Product Business Model: B2B Funding Stage: Bootstrapped Industry: Software Testing Salary Range: ₹ 20-45 Lacs PA Job Description About Element34 Element34 is the leading provider of managed enterprise testing grids deployed inside secure corporate networks. Our flagship solution, SeleniumBox (SBOX), is built by Selenium experts and trusted by top global financial institutions, government agencies, and tech companies for its security, scalability, and performance. We are a Banyan portfolio company committed to delivering enterprise-grade test automation infrastructure. About The Role We are looking for a Full Stack Engineer who will predominantly focus on backend development (60%) while also contributing to the frontend (40%). You will build secure, scalable, high-performance backend systems in Java, and work closely with React/Angular teams to create seamless web experiences. Key Responsibilities Develop backend services in Java, supporting REST APIs and microservices architecture Collaborate with frontend engineers using React or Angular Manage relational (SQL) and NoSQL databases Implement containerization and orchestration using Docker and Kubernetes Deploy and scale applications on cloud platforms like AWS, Azure, or Google Cloud Write and maintain unit tests; follow TDD best practices Participate in code reviews and mentor junior developers Drive backend best practices and performance optimizations Required Experience 5+ years backend development experience in Java (must-have) Experience with frontend frameworks: React or Angular Proficient in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB) Hands-on experience with REST APIs, Docker, Kubernetes, and cloud platforms Understanding of Agile/Scrum development Strong testing background (unit tests, TDD) Nice To Have Experience with automation testing frameworks (Selenium, Cypress) SaaS environment experience Join Element34 to shape the future of secure test automation infrastructure for the world’s most demanding enterprises.
Posted 3 days ago
14.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Head Digital Works is a pioneering force in Indian online skill gaming, evolving from a 2006 garage startup to a leader with over 80 million users and brands like A23 Rummy, A23 Poker, and Adda52. Over nearly two decades, it has shaped India’s real money gaming market through innovation, player safety, and exceptional user experiences. Focused on sustainable growth and trust-driven relationships, HDW continues to invest in technology and talent to build immersive gaming ecosystems—and drive the future of digital entertainment in India. Role Overview: We’re looking for a seasoned and visionary leader to spearhead our cloud infrastructure function with deep expertise in DevOps, DevSecOps, and AWS technologies. This pivotal role blends strategic foresight, technical excellence, and a collaborative spirit to build secure, scalable, and innovative infrastructure that powers our enterprise. What You’ll Lead & Shape: Leadership & Strategy Define and deliver the strategic roadmap for cloud infrastructure, DevOps, and DevSecOps in alignment with business priorities. Lead a high-impact team of cloud engineers, SREs, and DevOps professionals. Champion infrastructure modernization, cloud-native transformation, and cost-efficient practices. Cloud Infrastructure (AWS) Architect multi-account AWS environments with automation, scalability, and resilience. Implement and audit AWS Well-Architected Framework principles. Oversee cloud operations, including monitoring, alerting, and incident response using CloudWatch, CloudTrail, and third-party tools. DevOps / DevSecOps Drive the full lifecycle of DevOps: infrastructure provisioning (IaC), CI/CD automation, and secure deployments. Leverage IaC tools like Terraform, AWS CDK, and CloudFormation. Integrate security practices such as vulnerability scanning, secrets management, and compliance automation (SOC2, ISO 27001, etc.). Governance, Security & Compliance Establish and enforce governance for IAM, security policies, and cloud configurations. Collaborate with InfoSec teams to uphold enterprise-grade security standards. Set up infrastructure health checks, anomaly detection, and regulatory compliance. Cross-Functional Collaboration Partner with engineering, cybersecurity, and product teams to craft efficient CI/CD pipelines. Influence engineering culture toward cloud-first and DevSecOps excellence. Act as a technical escalation point for complex infrastructure challenges. What You Bring & Your Expertise: 14+ years in cloud infrastructure and DevOps/DevSecOps domains; 4–6 years leading distributed technical teams. Expert-level proficiency in AWS (VPC, EC2, Lambda, IAM, S3, RDS, etc.). Proven hands-on experience with Terraform, GitOps practices, CI/CD using Jenkins or GitLab, and container orchestration (EKS/Kubernetes). Strong grasp of DevSecOps principles, secure software pipelines, and cloud cost governance. Preferred Qualifications AWS Professional Certifications (Solutions Architect, DevOps Engineer, or Security Specialty). Experience with compliance: SOC 2, PCI DSS, HIPAA, and ISO 27001. Exposure to hybrid/multi-cloud environments (Azure/GCP). Familiarity with SRE frameworks (SLI/SLO/SLA tracking). Domain experience in gaming or BFSI is a strong plus. Why Head Digital Works? At Head Digital Works, innovation meets ownership. Our engineering culture thrives on autonomy, trust, and transparency. You’ll engage with cutting-edge technologies and contribute to business-critical systems in a collaborative, diverse, and rapidly evolving environment. 
Expect openness, ideas-driven teams, and leadership that values your voice. What we offer— Industry-Leading Compensation Comprehensive Mediclaim Coverage Accelerated Career Growth Excellence-Driven Recognition Programs Inclusive & Collaborative Work Culture
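As a small illustration of the CloudWatch monitoring and alerting responsibilities described in this role, the boto3 sketch below creates a CPU alarm; the region, instance ID, and SNS topic ARN are placeholders.

```python
# Illustrative sketch only: create a CloudWatch CPU utilization alarm with
# boto3, in the spirit of the monitoring/alerting duties above. Region,
# instance ID and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-demo-instance",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # 5-minute datapoints
    EvaluationPeriods=3,      # alarm after 15 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```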
Posted 3 days ago
18.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Opentext - The Information Company OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation. AI-First. Future-Driven. Human-Centered. At OpenText, AI is at the heart of everything we do—powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us. Your Impact We are looking for an execution-focused Director of Product Management to lead our strategy and roadmap spanning multi-tenant cloud and off-cloud solutions. The ideal candidate has deep product knowledge in automation technologies including RPA, workflow orchestration, low-code, and AI-driven processes. What The Role Offers The Director of Product Management will be responsible for defining and evolving the product vision and roadmap for our process automation and low-code solutions to align with business goals. This includes a mature on-premises as well as a SaaS-based solution. You will identify opportunities to leverage machine learning and low code/no code capabilities to enhance automation outcomes. You will lead the strategy for a team of product managers spanning multiple solutions. You will drive a high-performing environment that thrives on innovation, working closely with engineering, UX, Sales and Solutions Consultants to deliver a scalable, highly performant solution. You will engage with Sales and customers to understand pain points and develop solutions to address them through automation. You will understand industry trends and conduct regular competitive analysis to deliver best-in-class solutions. You will define and track KPIs such as customer adoption and win/loss analysis to inform priorities and product improvement. What You Need To Succeed 18+ years in software product management with at least three years in a leadership role. Strong understanding of automation technologies. Excellent communication skills with the ability to present to all levels of management. Experience with Agile methodologies. Bachelor’s Degree in Computer Science, Engineering or Business. OpenText is an equal opportunity employer that hires and attracts talent regardless of race, religious creed, color, national origin, ancestry, physical disability, mental disability, medical condition, marital status, sex, age, veteran status, or sexual orientation. At OpenText we acknowledge, value and respect diversity. We draw on diversity of thought and experience to reflect the rich array of cultures representing our broad global customer base. OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com.
Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.
Posted 3 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Junior Full Stack Engineer Job Description We are looking for a Junior Full Stack Engineer with a strong interest in AI-driven technologies to join our dynamic team. In this role, you will design, develop, and maintain AI agents and AI widgets that integrate large language models (LLMs) into both frontend and backend systems. You will collaborate with product managers, designers, data scientists, and fellow engineers to deliver innovative, high-quality solutions that leverage the latest advances in artificial intelligence. Key Responsibilities Full Stack Development: Design, develop, and maintain robust web widgets, ensuring seamless integration between frontend and backend components. AI & LLM Integration: Integrate large language models and AI agents into applications to enhance functionality, automation, and user experience. API Development: Build and maintain efficient RESTful APIs for AI-powered features and services. Database Management: Design and manage both relational and NoSQL databases to support AI-driven data workflows. Collaboration: Work closely with frontend developers, data scientists, and other stakeholders to deliver innovative, AI-centric solutions. Continuous Improvement: Stay updated with emerging AI technologies, frameworks, and industry trends, and drive the adoption of best practices. Technical Skills 2+ years of experience in full stack web development Proficiency in frontend frameworks such as React JS, along with vanilla HTML, CSS & JS Strong backend development experience with Node.js Hands-on experience integrating large language models (e.g., GPT-3, BERT) or similar AI technologies Experience with RESTful APIs and microservices architecture Solid understanding of relational and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB) Familiarity with cloud platforms (AWS, Azure, or GCP) and DevOps practices Experience with version control systems (e.g., Git) and CI/CD pipelines Soft Skills Excellent problem-solving and analytical skills Strong communication and collaboration abilities Ability to work independently and manage multiple priorities in a fast-paced environment Preferred Qualifications Experience with containerization technologies like Docker and Kubernetes Knowledge of security best practices for web and AI applications Familiarity with automated testing frameworks and tools Understanding of prompt engineering and multi-agent orchestration
Posted 3 days ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Overview: We are looking for an experienced AI Engineer to design, build, and optimize AI-powered applications, leveraging both traditional machine learning and large language models (LLMs). The ideal candidate will have a strong foundation in LLM fine-tuning, inference optimization, backend development, and MLOps, with the ability to deploy scalable AI systems in production environments. ShyftLabs is a leading data and AI company, helping enterprises unlock value through AI-driven products and solutions. We specialize in data platforms, machine learning models, and AI-powered automation, offering consulting, prototyping, solution delivery, and platform scaling. Our Fortune 500 clients rely on us to transform their data into actionable insights. Key Responsibilities: Design and implement traditional ML and LLM-based systems and applications Optimize model inference for performance and cost-efficiency Fine-tune foundation models using methods like LoRA, QLoRA, and adapter layers Develop and apply prompt engineering strategies including few-shot learning, chain-of-thought, and RAG Build robust backend infrastructure to support AI-driven applications Implement and manage MLOps pipelines for full AI lifecycle management Design systems for continuous monitoring and evaluation of ML and LLM models Create automated testing frameworks to ensure model quality and performance Basic Qualifications: Bachelor’s degree in Computer Science, AI, Data Science, or a related field 4+ years of experience in AI/ML engineering, software development, or data-driven solutions LLM Expertise: Experience with parameter-efficient fine-tuning (LoRA, QLoRA, adapter layers) Understanding of inference optimization techniques: quantization, pruning, caching, and serving Skilled in prompt engineering and design, including RAG techniques Familiarity with AI evaluation frameworks and metrics Experience designing automated evaluation and continuous monitoring systems Backend Engineering: Strong proficiency in Python and frameworks like FastAPI or Flask Experience building RESTful APIs and real-time systems Knowledge of vector databases and traditional databases Hands-on experience with cloud platforms (AWS, GCP, Azure) with a focus on ML services MLOps & Infrastructure: Familiarity with model serving tools (vLLM, SGLang, TensorRT) Experience with Docker and Kubernetes for deploying ML workloads Ability to build monitoring systems for performance tracking and alerting Experience building evaluation systems using custom metrics and benchmarks Proficiency in CI/CD and automated deployment pipelines Experience with orchestration tools like Airflow Hands-on experience with LLM frameworks (Transformers, LangChain, LlamaIndex) Familiarity with LLM-specific monitoring tools and general ML monitoring systems Experience with distributed training and inference in multi-GPU environments Knowledge of model compression techniques like distillation and quantization Experience deploying models for high-throughput, low-latency production use Research background or strong awareness of the latest developments in LLMs Tools & Technologies We Use: Frameworks: PyTorch, TensorFlow, Hugging Face Transformers Serving: vLLM, TensorRT-LLM, SGLang, OpenAI API Infrastructure: Docker, Kubernetes, AWS, GCP Databases: PostgreSQL, Redis, vector databases We are proud to offer a competitive salary alongside a strong healthcare insurance and benefits package. 
We pride ourselves on the growth of our employees, offering extensive learning and development resources.
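As a rough illustration of the parameter-efficient fine-tuning work this role describes, a LoRA setup might look like the minimal sketch below. It assumes the Hugging Face transformers and peft libraries; the base checkpoint, target modules, and hyperparameters are placeholders rather than anything prescribed by this posting.
# Minimal, illustrative LoRA fine-tuning setup (placeholders, not a prescribed stack).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_checkpoint = "gpt2"  # placeholder; a real project would choose a suitable base model

tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForCausalLM.from_pretrained(base_checkpoint)

# LoRA trains small low-rank adapter matrices instead of updating all base weights,
# which is what keeps fine-tuning of large models affordable.
lora_config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor applied to the adapter output
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection name for GPT-2; model-dependent
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
The wrapped model can then be passed to any standard training loop; only the adapter weights receive gradient updates.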
Posted 3 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description Data Engineer, Chennai We’re seeking a highly motivated Data Engineer to join our agile, cross-functional team and drive end-to-end data pipeline development in a cloud-native, big data ecosystem. You’ll leverage ETL/ELT best practices and data lakehouse paradigms to deliver scalable solutions. Proficiency in SQL, Python, Spark, and modern data orchestration tools (e.g. Airflow) is essential, along with experience in CI/CD, DevOps, and containerized environments like Docker and Kubernetes. This is your opportunity to make an impact in a fast-paced, data-driven culture. Responsibilities Responsible for data pipeline development and maintenance Contribute to development, maintenance, testing strategy, design discussions, and operations of the team Participate in all aspects of agile software development including design, implementation, and deployment Responsible for the end-to-end lifecycle of new product features / components Ensuring application performance, uptime, and scale, maintaining high standards of code quality and thoughtful application design Work with a small, cross-functional team on products and features to drive growth Learning new tools, languages, workflows, and philosophies to grow Research and suggest new technologies for boosting the product Have an impact on product development by making important technical decisions, influencing the system architecture, development practices and more Qualifications Excellent team player with strong communication skills B.Sc. in Computer Sciences or similar 3-5 years of experience in Data Pipeline development 3-5 years of experience in PySpark / Databricks 3-5 years of experience in Python / Airflow Knowledge of OOP and design patterns Knowledge of server-side technologies such as Java, Spring Experience with Docker containers, Kubernetes and Cloud environments Expertise in testing methodologies (Unit-testing, TDD, mocking) Fluent with large scale SQL databases Good problem-solving and analysis abilities Requirements - Advantage Experience with Azure cloud services Experience with Agile Development methodologies Experience with Git Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. 
We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. [Data Engineer] What You Will Do Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing Be a key team member that assists in design and development of the data pipeline Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions Take ownership of data pipeline projects from inception to deployment, manage scope, timelines, and risks Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency Implement data security and privacy measures to protect sensitive data Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions Collaborate and communicate effectively with product teams Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions Identify and resolve complex data-related challenges Adhere to best practices for coding, testing, and designing reusable code/components Explore new tools and technologies that will help to improve ETL platform performance Participate in sprint planning meetings and provide estimations on technical implementation What We Expect Of You We are all different, yet we all use our unique contributions to serve patients. 
Basic Qualifications and Experience: Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience Functional Skills: Must-Have Skills: Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning on big data processing Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools Excellent problem-solving skills and the ability to work with large, complex datasets Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA) Good-to-Have Skills: Experience with ETL tools such as Apache Spark and various Python packages related to data processing and machine learning model development Strong understanding of data modeling, data warehousing, and data integration concepts Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms Soft Skills: Excellent critical-thinking and problem-solving skills Strong communication and collaboration skills Demonstrated awareness of how to function in a team setting Demonstrated presentation skills What You Can Expect Of Us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 3 days ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today. About The Role Role Description: We are looking for an Associate Data Engineer with deep expertise in writing data pipelines to build scalable, high-performance data solutions. The ideal candidate will be responsible for developing, optimizing, and maintaining complex data pipelines, integration frameworks, and metadata-driven architectures that enable seamless access and analytics. This role requires a deep understanding of big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management. Roles & Responsibilities: Design, develop, and maintain data solutions for data generation, collection, and processing Be a key team member that assists in design and development of the data pipeline Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency Implement data security and privacy measures to protect sensitive data Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions Collaborate and communicate effectively with product teams Identify and resolve complex data-related challenges Adhere to best practices for coding, testing, and designing reusable code/components Explore new tools and technologies that will help to improve ETL platform performance Participate in sprint planning meetings and provide estimations on technical implementation What We Expect From You We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Bachelor’s degree and 2 to 4 years of Computer Science, IT or related field experience OR Diploma and 4 to 7 years of Computer Science, IT or related field experience Preferred Qualifications: Functional Skills: Must-Have Skills: Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), AWS, Redshift, Snowflake, workflow orchestration, and performance tuning on big data processing Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools. Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores. Experience with ETL tools such as Apache Spark and various Python packages related to data processing and machine learning model development Good-to-Have Skills: Experience with data modeling and performance tuning on relational and graph databases (e.g., MarkLogic, AllegroGraph, Stardog, RDF triplestore). 
Understanding of data modeling, data warehousing, and data integration concepts Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing Professional Certifications: AWS Certified Data Engineer preferred Databricks Certificate preferred Soft Skills: Excellent critical-thinking and problem-solving skills Strong communication and collaboration skills Demonstrated awareness of how to function in a team setting As an Associate Data Engineer at Amgen, you will be involved in the development and maintenance of data infrastructure and solutions. You will collaborate with a team of data engineers to design and implement data pipelines, perform data analysis, and ensure data quality. Your strong technical skills, problem-solving abilities, and attention to detail will contribute to the effective management and utilization of data for insights and decision-making.
Posted 3 days ago
0 years
0 Lacs
Trivandrum, Kerala, India
Remote
Role Description Role Proficiency: Resolve enterprise trouble tickets within the agreed SLA and raise problem tickets for permanent resolution, and/or provide mentorship (hierarchical or lateral) to junior associates. Outcomes: 1) Update SOPs with updated troubleshooting instructions and process changes 2) Mentor new team members in understanding customer infrastructure and processes 3) Perform analysis for driving incident reduction 4) Escalate high-priority incidents to customer and organization stakeholders for quicker resolution 5) Contribute to planning and successful migration of platforms 6) Resolve enterprise trouble tickets within the agreed SLA and raise problem tickets for permanent resolution 7) Provide inputs for root cause analysis after major incidents to define preventive and corrective actions. Measures Of Outcomes: 1) SLA adherence 2) Time-bound resolution of elevated tickets (OLA) 3) Management of ticket backlog timelines (OLA) 4) Adherence to defined process (number of NCs in internal/external audits) 5) Number of KB articles created 6) Number of incidents and change tickets handled 7) Number of elevated tickets resolved 8) Number of successful change tickets 9) % completion of all mandatory training requirements. Resolution Outputs Expected: Understand priority and severity based on ITIL practice; resolve trouble tickets within the agreed resolution SLA; execute change control tickets as documented in the implementation plan. Troubleshooting: Troubleshoot based on available information from previous tickets or by consulting with seniors; participate in online knowledge forums for reference; convert new steps into KB articles; perform logical/analytical troubleshooting. Escalation/Elevation: Escalate within the organization or to customer peers in case of resolution delay; understand the OLA between delivery layers (L1, L2, L3, etc.) and adhere to it; elevate to the next level and work on elevated tickets from L1. Ticket Backlog/Resolution: Follow up on tickets based on agreed timelines and manage ticket backlogs/last activity as per the defined process; resolve incidents and SRs within agreed timelines; execute change tickets for infrastructure. Installation: Install and configure tools, software, and patches. Runbook/KB: Update the KB with new findings; document and record troubleshooting steps as knowledge base articles. Collaboration: Collaborate with different delivery towers for ticket resolution within SLA; resolve L1 tickets with help from the respective tower; collaborate with other team members for timely resolution of tickets; actively participate in team/organization-wide initiatives; coordinate with UST ISMS teams for resolving connectivity-related issues. Stakeholder Management: Lead customer calls and vendor calls; organize meetings with different stakeholders; take ownership of the function's internal communications and related change management. Strategic: Define the strategy for data management, policy management, and data retention management; support definition of the IT strategy for the function's relevant scope and be accountable for ensuring the strategy is tracked, benchmarked, and updated for the area owned. Process Adherence: Maintain a thorough understanding of organization- and customer-defined processes; suggest process improvements and CSI ideas; adhere to the organization's policies and business conduct. Process/Efficiency Improvement: Proactively identify opportunities to increase service levels and mitigate any issues in service delivery within the function or across functions. 
Take accountability for overall productivity efforts within the function, including coordination of function-specific tasks and close collaboration with Finance. Process Implementation: Coordinate and monitor IT process implementation within the function. Compliance: Support information governance activities and audit preparations within the function; act as a function SPOC for IT audits in local sites (incl. preparation, interface to the local organization, mitigation of findings, etc.) and work closely with ISRM (Information Security Risk Management); coordinate overall objective-setting preparation and facilitate the process in order to achieve consistent objective setting in the function job description. Coordination: Support for CSI across all services in CIS and beyond. Training: On-time completion of all mandatory training requirements of organization and customer; provide on-floor training and one-to-one mentorship for new joiners; complete certification of respective career paths. Performance Management: Update FAST Goals in NorthStar, track and report, and seek continuous feedback from peers and manager; set goals for team members and mentees and provide feedback; assist new team members in understanding the customer environment. Skill Examples: 1) Good communication skills (written, verbal, and email etiquette) to interact with different teams and customers 2) Modify/create runbooks based on suggested changes from juniors or newly identified steps 3) Ability to work on and resolve elevated server tickets 4) Networking: a. Troubleshooting skills in static and dynamic routing protocols b. Should be capable of running NetFlow analyzers in different product lines 5) Server: a. Skills in installing and configuring Active Directory, DNS, DHCP, DFS, IIS, and patch management b. Excellent troubleshooting skills in various technologies like AD replication, DNS issues, etc. c. Skills in managing high-availability solutions like failover clustering, VMware clustering, etc. 6) Storage and Backup: a. Ability to give recommendations to customers; perform storage & backup enhancements; perform change management b. Skilled in core fabric technology, storage design, and implementation; hands-on experience with backup and storage command-line interfaces c. Perform hardware upgrades, firmware upgrades, vulnerability remediation, storage and backup commissioning and de-commissioning, and replication setup and management d. Skilled in server, network, and virtualization technologies; integration of virtualization, storage, and backup technologies e. Review technical diagrams and architecture diagrams and modify SOPs and documentation based on business requirements f. Ability to perform the ITSM functions for the storage & backup team and review the quality of the ITSM process followed by the team 7) Cloud: a. Skilled in any one of the cloud technologies - AWS, Azure, GCP 8) Tools: a. Skilled in administration and configuration of monitoring tools like CA UIM, SCOM, SolarWinds, Nagios, ServiceNow, etc. b. Skilled in SQL scripting c. Skilled in building custom reports on availability and performance of IT infrastructure based on customer requirements 9) Monitoring: a. Skills in monitoring infrastructure and application components 10) Database: a. Data modeling and database design; database schema creation and management b. Identify data integrity violations so that only accurate and appropriate data is entered and maintained c. Backup and recovery d. Web-specific tech expertise for e-Biz, Cloud, etc. 
Examples of this type of technology include XML, CGI, Java, Ruby, firewalls, SSL, and so on. e. Migrating database instances to new hardware and new versions of software, from on-premises to cloud-based databases and vice versa 11) Quality Analysis: a. Ability to drive service excellence and continuous improvement within the framework defined by IT Operations. Knowledge Examples: 1) Good understanding of customer infrastructure and related CIs 2) ITIL Foundation certification 3) Thorough hardware knowledge 4) Basic understanding of capacity planning 5) Basic understanding of storage and backup 6) Networking: a. Hands-on experience with routers, switches, and firewalls b. Should have minimum knowledge of and hands-on experience with BGP c. Good understanding of load balancers and WAN optimizers d. Advanced backup and restore knowledge in backup tools 7) Server: a. Basic to intermediate PowerShell/Bash/Python scripting knowledge and demonstrated experience in script-based tasks b. Knowledge of AD group policy management, group policy tools, and troubleshooting GPOs c. Basic AD object creation, DNS concepts, DHCP, DFS d. Knowledge of tools like SCCM and SCOM administration 8) Storage and Backup: a. Subject matter expert in any of the storage & backup technologies 9) Tools: a. Proficient in understanding and troubleshooting the Windows and Linux families of operating systems 10) Monitoring: a. Strong knowledge of ITIL processes and functions 11) Database: a. Knowledge of general database management b. Knowledge of OS, system, and networking skills. Additional Comments Role - Cloud Engineer Primary Responsibilities: Engineer and support a portfolio of tools including: o HashiCorp Vault (HCP Dedicated), Terraform (HCP), Cloud Platform o GitHub Enterprise Cloud (Actions, Advanced Security, Copilot) o Ansible Automation Platform, Env0, Docker Desktop o Elastic Cloud, Cloudflare, Datadog, PagerDuty, SendGrid, Teleport Manage infrastructure using Terraform, Ansible, and scripting languages such as Python and PowerShell Enable security controls including dynamic secrets management, secrets scanning workflows, and cloud access quotas Design and implement automation for self-service adoption, access provisioning, and compliance monitoring Respond to user support requests via ServiceNow and continuously improve platform support documentation and onboarding workflows Participate in Agile sprints, sprint planning, and cross-team technical initiatives Contribute to evaluation and onboarding of new tools (e.g., remote developer access, artifact storage) Key Projects You May Lead or Support: GitHub secrets scanning and remediation with integration to HashiCorp Vault Lifecycle management of developer access across tools like GitHub and Teleport Upgrades to container orchestration environments and automation platforms (EKS, AKS) Technical Skills and Experience: Proficiency with Terraform (IaC) and Ansible Strong scripting experience in Python, PowerShell, or Bash Experience operating in cloud environments (AWS, Azure, or GCP) Familiarity with secure development practices and DevSecOps tooling Exposure to or experience with: o CI/CD automation (GitHub Actions) o Monitoring and incident management platforms (Datadog, PagerDuty) o Identity providers (AzureAD, Okta) o Containers and orchestration (Docker, Kubernetes) o Secrets management and vaulting platforms Soft Skills and Attributes: Strong cross-functional communication skills with technical and non-technical stakeholders Ability to work independently while knowing when to escalate or align with other engineers or teams. 
Comfort managing complexity and ambiguity in a fast-paced environment. Ability to balance short-term support needs with longer-term infrastructure automation and optimization. Proactive, service-oriented mindset focused on enabling secure and scalable development. Detail-oriented, structured approach to problem-solving with an emphasis on reliability and repeatability. Skills: Terraform, Ansible, Python, PowerShell or Bash, AWS, Azure or GCP, CI/CD automation
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Experience: 5-8 yrs. Responsible for building agentic workflows using modern LLM orchestration frameworks to automate and optimize complex business processes in the Travel domain. Individual contributor (IC), owning end-to-end development of intelligent agents and services that power customer experiences, recommendations, and backend automation. Design and implement agentic and autonomous workflows using frameworks such as LangGraph, LangChain, and CrewAI. Translate business problems in the Travel domain into intelligent LLM-powered workflows. Own at least two AI use case implementations from design to production deployment. Build and expose RESTful and GraphQL APIs to support internal and external consumers. Develop and maintain robust Python-based microservices using FastAPI or Django. Collaborate with product managers, data engineers, and backend teams to design seamless AI-driven user experiences. Deploy and maintain workflows and APIs on AWS with best practices in scalability and security. Nice to have: Experience with Big Data technologies (Hadoop, Teradata, Snowflake, Spark, Redshift, Kafka, etc.) for data processing. Experience with data management processes on AWS is a huge plus. AWS certification. Hands-on experience building applications with LangGraph, LangChain, and CrewAI. Experience working with AWS services - Lambda, API Gateway, S3, ECS, DynamoDB. Proven track record of implementing at least two AI/LLM-based use cases in production. Strong problem-solving skills with the ability to deconstruct complex problems into actionable AI workflows. Experience building scalable, production-grade APIs using FastAPI or Django. Strong command of Python and software engineering best practices. Solid understanding of multithreading, IO operations, and scalability patterns in backend systems.
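For a rough sense of the deliverables listed above, a Python microservice exposing an agentic workflow over a REST endpoint could be sketched as below. This is a minimal illustration assuming FastAPI and pydantic; the route path and the run_itinerary_agent helper are hypothetical stand-ins for whatever LangGraph, LangChain, or CrewAI workflow would actually be built.
# Minimal, illustrative FastAPI sketch for exposing an LLM-driven workflow as a REST API.
# The endpoint and run_itinerary_agent helper are hypothetical placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="travel-agentic-service")

class TripRequest(BaseModel):
    query: str              # e.g. "3-day beach trip for two in December"
    traveler_count: int = 1

async def run_itinerary_agent(query: str, traveler_count: int) -> dict:
    # Placeholder for an agentic workflow (e.g. a LangGraph graph or a CrewAI crew)
    # that would plan, call tools, and assemble the final response.
    return {"summary": f"Draft itinerary for {traveler_count} traveler(s): {query}"}

@app.post("/v1/itineraries")
async def create_itinerary(request: TripRequest) -> dict:
    # Keep the API layer thin: validate input with pydantic, then delegate to the workflow.
    return await run_itinerary_agent(request.query, request.traveler_count)
Served locally with, for example, uvicorn, the same thin-API-over-workflow pattern extends to the GraphQL layer mentioned above.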
Posted 3 days ago
9.0 - 15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts. Job Category User Experience Job Details About Salesforce Salesforce is the #1 AI CRM, where humans with agents drive customer success together. Here, ambition meets action. Tech meets trust. And innovation isn’t a buzzword; it’s a way of life. The world of work as we know it is changing, and we're looking for Trailblazers who are passionate about bettering business and the world through AI, driving innovation, and keeping Salesforce's core values at the heart of it all. Ready to level up your career at the company leading workforce transformation in the agentic era? You’re in the right place! Agentforce is the future of AI, and you are the future of Salesforce. About The Role Salesforce is looking for User Experience Researchers to join our Product Research and Insights team in Hyderabad & Bangalore, India. The ideal candidates will have extensive experience in carrying out generative/foundational research for enterprise SaaS and/or technology products. Expertise in qualitative research will be key to succeeding in this role. Candidates from B2B, Enterprise SaaS & IT industries are highly desired. About The Team Automation and Integration Cloud - In this role you will work on our core products, which include Flow Builder, Flow Orchestration, RPA, and the entire suite of MuleSoft products. An engineering background is desired for this cloud. See the product and take a free demo here. Desired experience Lead Researcher: 9 - 15 years Grade/Level offered will depend upon the performance in the interviews. About You You love working in a fast-paced, ever-dynamic environment. You are enthused about leading product direction with your research insights and find cool new avenues to engage with the product team to infuse user-centric insights into product planning. You distill sophisticated problems into insights that inform design, development, and business decisions. You’re passionate about technology. You’re even more passionate about technology users and buyers. You have deep empathy for their everyday struggles and challenges. You always put their needs first, and you’re unwavering in your desire to provide the best experiences for users. Responsibilities Scope and drive research projects that inform product strategy, design, and development, in collaboration with our cross-functional partners across the Automation and Integration space. Create relationships with stakeholders and demonstrate the skill to identify gaps in product thinking to recommend appropriate research. Conduct generative and evaluative research using a mixture of large-scale research methods (surveys, unmoderated testing, behavioral data analysis, etc.) and small-scale research methods (interviews, moderated concept testing, etc.) 
Synthesize research findings into insights and recommendations and work with collaborators to socialize these findings Partner fully with product owners, designers, engineers, competitive intelligence, and other researchers to provide the best possible experience for our users and customers Create narratives to frame problems and highlight the business value of potential solutions Be a strategic business partner to key executives, helping shape their long-term vision Work on fast-paced projects, requiring attention to detail and working within constrained timelines Willing and able to work across globally distributed teams Required Experience / Skills For Senior Researcher, minimum 6 years of full time work experience conducting research in user experience, product design, or technology contexts for IT product based companies. For Lead Researcher, minimum 10 years of full time work experience conducting research in user experience, product design, or technology contexts for IT product based companies. Proven track record influencing user experience and/or product direction and strategy with actionable insights Ability to plan, design, complete and communicate both strategic and tactical research engagements Ability to structure and lead internal and external workshops or design studios and analyze the outcomes to provide insight for partners Expert understanding of research methods (qualitative and quantitative) and standard processes Experience working in cross-functional teams (e.g. product management, design, engineering) Comfortable with basic statistical methods and concepts, and experience working with behavioral signals data Preferred But Not Required Previous research experience in enterprise iPaaS and/or automation technologies and services People management experience Experience leading research independently for entire products rather than features Flexible to work with global teams across varied timezones Unleash Your Potential When you join Salesforce, you’ll be limitless in all areas of your life. Our benefits and resources support you to find balance and be your best , and our AI agents accelerate your impact so you can do your best . Together, we’ll bring the power of Agentforce to organizations of all sizes and deliver amazing experiences that customers love. Apply today to not only shape the future — but to redefine what’s possible — for yourself, for AI, and the world. Accommodations If you require assistance due to a disability applying for open positions please submit a request via this Accommodations Request Form. Posting Statement Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that’s inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. 
It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
Posted 3 days ago