3.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
AI is transforming the way businesses operate, yet most AI-powered products fail to deliver real, measurable impact. Companies struggle to bridge the gap between cutting-edge models and practical applications, leading to AI features that are difficult to use, expensive to run, and misaligned with real business needs. Despite rapid advancements, most AI products still suffer from poor adoption, high inference costs, and limited integration into existing workflows.

At IgniteTech, we are solving this problem by focusing on AI that delivers tangible improvements in customer engagement, retention, and efficiency. We don't just build prototypes; we bring AI-powered products to market, integrating them directly into high-value workflows. Our approach prioritizes business outcomes over research experiments, ensuring that every AI-driven feature is optimized for usability, performance, and long-term sustainability. This is an opportunity to work on AI that is actively reshaping how businesses operate.

This role is not a high-level strategy position focused on product roadmaps without execution. It is a hands-on product management role where you will define, build, and ship AI-powered features that customers actually use. You will work closely with ML engineers to translate business needs into technical requirements, making decisions about model performance, trade-offs between accuracy and speed, and the real-world costs of AI inference. The ideal candidate understands both the business impact of AI and the technical challenges of deploying it at scale. If your experience is limited to general AI awareness without direct involvement in shipping AI-powered products, this role is not the right fit. If you thrive on solving hard problems at the intersection of AI, product, and business, and you're eager to bring AI to market in a way that truly matters, then we want to hear from you!

What You Will Be Doing
- Identifying specific applications of GenAI technology within IgniteTech's product range
- Creating detailed roadmaps for each product and creating POCs that simulate the AI vision for the new features
- Rolling out AI-driven functionalities, addressing any blockers to customer adoption, and ensuring smooth integration into the product suite

What You Won't Be Doing
- Anything related to software engineering or technical support

Senior Product Manager Key Responsibilities
- Designing high-quality, customer-centric AI solutions that enhance product adoption, engagement, and retention

Basic Requirements
- 3+ years of product management experience in the B2B software industry
- Professional experience using generative AI tools, such as ChatGPT, Claude, or Gemini, to automate repetitive tasks

About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We're doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility. A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $100 USD/hour, which equates to $200,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5438-IN-Chennai-SeniorProductM
Posted 2 days ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
AI is transforming the way businesses operate, yet most AI-powered products fail to deliver real, measurable impact. Companies struggle to bridge the gap between cutting-edge models and practical applications, leading to AI features that are difficult to use, expensive to run, and misaligned with real business needs. Despite rapid advancements, most AI products still suffer from poor adoption, high inference costs, and limited integration into existing workflows.

At IgniteTech, we are solving this problem by focusing on AI that delivers tangible improvements in customer engagement, retention, and efficiency. We don't just build prototypes; we bring AI-powered products to market, integrating them directly into high-value workflows. Our approach prioritizes business outcomes over research experiments, ensuring that every AI-driven feature is optimized for usability, performance, and long-term sustainability. This is an opportunity to work on AI that is actively reshaping how businesses operate.

This role is not a high-level strategy position focused on product roadmaps without execution. It is a hands-on product management role where you will define, build, and ship AI-powered features that customers actually use. You will work closely with ML engineers to translate business needs into technical requirements, making decisions about model performance, trade-offs between accuracy and speed, and the real-world costs of AI inference. The ideal candidate understands both the business impact of AI and the technical challenges of deploying it at scale. If your experience is limited to general AI awareness without direct involvement in shipping AI-powered products, this role is not the right fit. If you thrive on solving hard problems at the intersection of AI, product, and business, and you're eager to bring AI to market in a way that truly matters, then we want to hear from you!

What You Will Be Doing
- Identifying specific applications of GenAI technology within IgniteTech's product range
- Creating detailed roadmaps for each product and creating POCs that simulate the AI vision for the new features
- Rolling out AI-driven functionalities, addressing any blockers to customer adoption, and ensuring smooth integration into the product suite

What You Won't Be Doing
- Anything related to software engineering or technical support

Product Manager Key Responsibilities
- Designing high-quality, customer-centric AI solutions that enhance product adoption, engagement, and retention

Basic Requirements
- 3+ years of product management experience in the B2B software industry
- Professional experience using generative AI tools, such as ChatGPT, Claude, or Gemini, to automate repetitive tasks

About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We're doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility. A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $100 USD/hour, which equates to $200,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5438-IN-Mumbai-ProductManager
Posted 2 days ago
10.0 years
0 Lacs
Dholera, Gujarat, India
On-site
Tata Electronics Private Limited (TEPL) is a greenfield venture of the Tata Group with expertise in manufacturing precision components. Tata Electronics (a wholly owned subsidiary of Tata Sons Pvt. Ltd.) is building India's first AI-enabled, state-of-the-art semiconductor foundry. This facility will produce chips for applications such as power management ICs, display drivers, microcontrollers (MCUs) and high-performance computing logic, addressing the growing demand in markets such as automotive, computing and data storage, wireless communications and artificial intelligence. The Tata Group operates in more than 100 countries across six continents, with the mission 'To improve the quality of life of the communities we serve globally, through long-term stakeholder value creation based on leadership with Trust.'

Responsibilities:
- Lead the Yield Management System team for a 300mm wafer fab.
- Partner with the Digital/IT team on design and development of the yield management software and database.
- Design and create a data integration framework that collects data from different sources (E-test, defect inspection, inline metrology, sort).
- Yield analysis tools: Develop algorithms for data analysis to enable root-cause understanding and yield optimization.
- Partner with PI/YE, CFM teams and vendors to enable continuous system improvements.
- Automation & reporting: Generate automated yield reports.
- Cross-functional collaboration: Work with Product Engineering and Process Integration teams to gather requirements to build the software to support yield management.
- Ensure reliability and scalability of the software for high-volume data.
- Present updates to internal and customer senior executives.
- Travel as required.

Essential Attributes:
- Self-driven, independent, and results-oriented.
- Strong cross-functional collaboration skills across global teams.
- Continuous learning mindset.
- Curious, data-driven, and resilient problem-solver.
- Open, humble, and relationship-focused communicator.
- Creative and agile in exploring new ideas and adapting to change.

Qualifications:
- Minimum Bachelor's degree in electrical engineering, computer science or equivalent; advanced degree preferred.
- Experience in data analysis, failure/defect analysis and yield improvement.
- Strong understanding of device physics, process integration, yield improvement and failure mechanisms.
- Familiarity with statistical techniques and data visualization techniques.
- Familiarity with operations dashboards.
- Familiarity with process modules and metrology/defect inspection tools.
- Familiarity with AI/ML basics.
- Innovation mindset.

Desired Experience:
- 10+ years' experience in the semiconductor industry, with specific experience in E-test / wafer sort.
- Proven structured problem-solving skills using 8D and other methods.
- Has programmed and managed E-test systems.
Posted 2 days ago
3.0 years
0 Lacs
Rawatsar, Rajasthan, India
Remote
AI is transforming the way businesses operate, yet most AI-powered products fail to deliver real, measurable impact. Companies struggle to bridge the gap between cutting-edge models and practical applications, leading to AI features that are difficult to use, expensive to run, and misaligned with real business needs. Despite rapid advancements, most AI products still suffer from poor adoption, high inference costs, and limited integration into existing workflows.

At IgniteTech, we are solving this problem by focusing on AI that delivers tangible improvements in customer engagement, retention, and efficiency. We don't just build prototypes; we bring AI-powered products to market, integrating them directly into high-value workflows. Our approach prioritizes business outcomes over research experiments, ensuring that every AI-driven feature is optimized for usability, performance, and long-term sustainability. This is an opportunity to work on AI that is actively reshaping how businesses operate.

This role is not a high-level strategy position focused on product roadmaps without execution. It is a hands-on product management role where you will define, build, and ship AI-powered features that customers actually use. You will work closely with ML engineers to translate business needs into technical requirements, making decisions about model performance, trade-offs between accuracy and speed, and the real-world costs of AI inference. The ideal candidate understands both the business impact of AI and the technical challenges of deploying it at scale. If your experience is limited to general AI awareness without direct involvement in shipping AI-powered products, this role is not the right fit. If you thrive on solving hard problems at the intersection of AI, product, and business, and you're eager to bring AI to market in a way that truly matters, then we want to hear from you!

What You Will Be Doing
- Identifying specific applications of GenAI technology within IgniteTech's product range
- Creating detailed roadmaps for each product and creating POCs that simulate the AI vision for the new features
- Rolling out AI-driven functionalities, addressing any blockers to customer adoption, and ensuring smooth integration into the product suite

What You Won't Be Doing
- Anything related to software engineering or technical support

Senior Product Manager Key Responsibilities
- Designing high-quality, customer-centric AI solutions that enhance product adoption, engagement, and retention

Basic Requirements
- 3+ years of product management experience in the B2B software industry
- Professional experience using generative AI tools, such as ChatGPT, Claude, or Gemini, to automate repetitive tasks

About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We're doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility. A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.
There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $100 USD/hour, which equates to $200,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5438-LK-COUNTRY-SeniorProductM
Posted 2 days ago
5.0 years
0 Lacs
Dholera, Gujarat, India
On-site
Tata Electronics Private Limited (TEPL) is a greenfield venture of the Tata Group with expertise in manufacturing precision components. Tata Electronics (a wholly owned subsidiary of Tata Sons Pvt. Ltd.) is building India's first AI-enabled, state-of-the-art semiconductor foundry. This facility will produce chips for applications such as power management ICs, display drivers, microcontrollers (MCUs) and high-performance computing logic, addressing the growing demand in markets such as automotive, computing and data storage, wireless communications and artificial intelligence. The Tata Group operates in more than 100 countries across six continents, with the mission 'To improve the quality of life of the communities we serve globally, through long-term stakeholder value creation based on leadership with Trust.'

Responsibilities:
- Partner with the Digital/IT team on design and development of the yield management software and database.
- Design and create a data integration framework that collects data from different sources (E-test, defect inspection, inline metrology, sort).
- Yield analysis tools: Develop algorithms for data analysis to enable root-cause understanding and yield optimization.
- Partner with PI/YE, CFM teams and vendors to enable continuous system improvements.
- Automation & reporting: Generate automated yield reports.
- Cross-functional collaboration: Work with Product Engineering and Process Integration teams to gather requirements to build the software to support yield management.
- System maintenance & scalability: Ensure reliability and scalability of the software for high-volume data.
- Present updates to internal and customer senior executives.
- Travel as required.

Essential Attributes:
- Self-driven, independent, and results-oriented.
- Strong cross-functional collaboration skills across global teams.
- Continuous learning mindset.
- Curious, data-driven, and resilient problem-solver.
- Open, humble, and relationship-focused communicator.
- Creative and agile in exploring new ideas and adapting to change.

Qualifications:
- Minimum Bachelor's degree in electrical engineering, computer science or equivalent; advanced degree preferred.
- Experience in data analysis, failure/defect analysis and yield improvement.
- Strong understanding of device physics, process integration, yield improvement and failure mechanisms.
- Can write basic code (e.g., Python, C++).
- Familiarity with statistical techniques and data visualization techniques.
- Familiarity with operations dashboards.
- Familiarity with process modules and metrology/defect inspection tools.
- Familiarity with AI/ML basics.
- Innovation mindset.

Desired Experience Level:
- 5+ years' experience in the semiconductor industry, with specific experience in E-test / wafer sort.
- Proven structured problem-solving skills using 8D and other methods.
- Has programmed and managed E-test systems.
Posted 2 days ago
4.0 years
0 Lacs
Delhi, India
On-site
Ways of working - Mandate 3 - Office/Field: Employees will work full time from their office base location.

About Swiggy
Swiggy is India's leading on-demand delivery platform with a tech-first approach to logistics and a solution-first approach to consumer demands. With a presence in 500+ cities across India, partnerships with hundreds of thousands of restaurants, an employee base of over 5000, and a 2 lakh+ strong independent fleet of Delivery Executives, we deliver unparalleled convenience driven by continuous innovation. Built on the back of robust ML technology and fueled by terabytes of data processed every day, Swiggy offers a fast, seamless and reliable delivery experience for millions of customers across India. From starting out as a hyperlocal food delivery service in 2014 to becoming India's leading on-demand convenience platform today, our capabilities result not only in lightning-fast delivery for customers, but also in a productive and fulfilling experience for our employees.

Job Description
We are looking for a sales manager in each of the above cities to manage our corporate (B2B) sales function. The person's primary responsibility would be to build a funnel of corporate client relationships, convert them into clients who adopt Swiggy's corporate offerings, manage the relationship and sales funnel, and drive continuous engagement. The person should also be adept at identifying gaps based on the needs of the client and providing feedback to the product team on areas to build.

Key Responsibilities
- Identify and build a network of corporate clients; continuously expand while farming existing relationships.
- Operate a sales beat that maximizes engagement with clients, converts them, and farms existing client relationships.
- Achieve monthly sales goals for the region and build the pipeline for subsequent periods.
- Manage the sales administration function and operational performance reporting, streamlining processes and systems wherever possible, and advising senior management on maximizing business relationships and creating an environment where customer service can flourish.
- Build a local marketing activation plan in locations as needed and execute with a focus on results.
- Plan and execute a quarterly operating plan incorporating local nuances.
- Participate in conferences and roadshows targeting Admin/EA clientele and present Swiggy offerings in those forums.
- Collaborate and work effectively with other sales managers of the region to operate and execute.
Key competencies or values the person has to role model:
- Relationship building and management with corporate clientele.
- High agency and ownership; should be a self-starter and role model a high bar on ownership.
- Ability to communicate well and effectively with corporate clients.
- Move fast, break barriers and deliver.
- Grit and resilience: ability to bounce back and persist with corporate clients.
- Does not accept "no" as an answer unless fully convinced himself/herself, and finds alternative solutions.
- Good understanding of P&L and the ability to deliver on it.

Desired Skills
- Graduates with 4+ years of experience, having handled corporate clients in past roles.
- Strong communication skills and the ability to manage relationships.
- Attitude and aptitude for sales.
- Should be a team player, working with peers to deliver a great experience.

"We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, disability status, or any other characteristic protected by the law."
Posted 2 days ago
6.0 - 10.0 years
20 - 30 Lacs
Hyderabad
Hybrid
Job Description
We are seeking a Senior Software Engineer to lead and guide our engineering team in building and enhancing our internal AI Gateway platform. This platform empowers internal organizations to create and deploy AI use cases, integrating with Azure OpenAI, AWS Bedrock, Databricks, and other cloud platforms. You will drive the architecture, design, and implementation of scalable data/model management pipelines, agentic AI, RAG, tracing, and MCP server components.

- Essential: hands-on development experience in Python, JavaScript, TypeScript, and Node.js.
- Essential: hands-on experience building scalable GenAI applications leveraging LLMs (e.g., OpenAI GPT, Anthropic Claude) using techniques such as RAG and agentic AI (a minimal illustrative sketch of the RAG pattern follows this listing).
- Preferred: experience developing GenAI solutions using frameworks such as LangChain and LlamaIndex.
- Preferred: experience building AI pipelines for interaction via chatbots, user interfaces, and autonomous agents.
- Must have excellent communication skills to understand complex AI concepts and work with architects to build the solution, as well as the ability to explain it to non-technical stakeholders.
- Preferred: experience deploying solutions on AWS with best practices.
- Good to have: GenAI capabilities backed by evaluation (an existing evaluation framework or one custom-built for the specific use case). Contributions to AI research, open-source models or AI hackathons are a plus.

Responsibilities:
- Lead the design and development of the AI Gateway platform and its integrations (Azure OpenAI, AWS Bedrock, Databricks, etc.).
- Architect and implement scalable, secure, and maintainable data/model management pipelines.
- Guide the team in building agentic AI, RAG (Retrieval-Augmented Generation), tracing, and MCP server solutions.
- Mentor and upskill team members, conduct code reviews, and enforce best practices.
- Collaborate with product, DevOps, and data science teams to deliver robust solutions.
- Drive automation and CI/CD for model and data pipeline deployments.
- Ensure platform reliability, observability, and security.

Requirements:
- 7+ years of software engineering experience, with at least 2 years in a technical leadership role.
- Strong Python (FastAPI, asyncio), cloud (Azure, AWS), and data engineering skills.
- Experience with LLMs, agentic AI, RAG, and orchestration frameworks.
- Hands-on experience with cloud ML services (Azure OpenAI, AWS Bedrock, Databricks).
- Familiarity with CI/CD, Docker, Kubernetes, and infrastructure-as-code.
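The listing above names RAG (Retrieval-Augmented Generation) among the required techniques. Purely as an illustration, and not part of the posting, the sketch below shows the basic pattern in Python: retrieve the most relevant internal documents for a query, then assemble an augmented prompt. The toy keyword-overlap scoring, the sample documents, and the omitted LLM client are all assumptions; a real gateway would use vector embeddings and a framework such as LangChain or LlamaIndex.

```python
# Minimal sketch of a Retrieval-Augmented Generation (RAG) flow, for illustration only.
# The retrieval step uses a toy keyword-overlap score; a real gateway would use
# vector embeddings and an actual LLM client (both are assumptions here).
from collections import Counter

DOCUMENTS = [
    "The AI Gateway routes requests to Azure OpenAI or AWS Bedrock.",
    "Model usage is traced per team to attribute inference cost.",
    "RAG pipelines ground LLM answers in internal documents.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of shared lowercase tokens."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(DOCUMENTS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt an LLM client would receive."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    # An LLM call such as call_llm(build_prompt(...)) would go here; the client
    # itself is deliberately out of scope for this sketch.
    print(build_prompt("How does the gateway control inference cost?"))
```

Running the script prints only the assembled prompt; the point is that retrieval and prompt assembly are ordinary application code, while the model call stays behind whatever gateway client the platform provides.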
Posted 2 days ago
17.0 - 20.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview
Developer Experience is a growing department within the Global Technology division of Bank of America. We drive modernization of technology tools and processes and Operational Excellence work across Global Technology. The organization operates in a very dynamic and fast-paced global business environment. As such, we value versatility, creativity, and innovation provided through individual contributors and teams that come from diverse backgrounds and experiences. We believe in an Agile SDLC environment with a strong focus on technical excellence and continuous process improvement.

Job Description
We are seeking a strategic and hands-on Principal Engineer to drive the design, modernization, and delivery of secure, enterprise-grade applications at scale. In this role, you will shape architectural decisions, introduce modern engineering practices, and influence platform and product teams to build secure, scalable, and observable systems. This is a high-impact technical leadership role for a proven engineer passionate about cloud-native architecture, developer experience, and responsible innovation.

Responsibilities
- Lead architecture, design and development of modern, distributed applications using a modern tech stack, frameworks, and cloud-native patterns.
- Provide hands-on leadership in designing system components, APIs, and integration patterns, ensuring high performance, security, and maintainability.
- Define and enforce architectural standards, reusable patterns, coding practices and technical governance across engineering teams.
- Guide the modernization of legacy systems into modern architectures, optimizing for resilience, observability, and scalability.
- Integrate secure-by-design principles across the SDLC through threat modeling, DevSecOps practices, and zero-trust design.
- Drive engineering effectiveness by enhancing observability and developer metrics and promoting runtime resiliency.
- Champion the responsible adoption of Generative AI tools to improve development productivity, code quality and automation.
- Collaborate with product owners, platform teams, and stakeholders to align application design with business goals.
- Champion DevSecOps, API-first design, and test automation to ensure high-quality and secure software delivery.
- Evaluate and introduce new tools, frameworks, and design patterns that improve engineering efficiency and platform consistency.
- Mentor and guide engineers through design reviews, performance tuning and technical deep dives.

Requirements

Education
Graduation / Post Graduation: BE/B.Tech/MCA
Certifications, if any: NA

Experience Range
17 to 20 years

Foundational Skills
- Proven expertise in architecting large-scale distributed systems with a strong focus on Java-based cloud-native applications using Spring Boot, Spring Cloud and API-first design; experience defining reference architectures, reusable patterns, and modernization blueprints.
- Deep hands-on experience with container orchestration platforms like Kubernetes/OpenShift, including service mesh, autoscaling, observability and cost-aware architecture.
- In-depth knowledge of relational and NoSQL data platforms (e.g., Oracle, PostgreSQL, MongoDB, Redis), including data modeling for microservices, transaction patterns, distributed consistency, caching strategies, and query performance optimization.
- Expertise in CI/CD pipelines, GitOps and DevSecOps practices for secure, automated application delivery; strong understanding of API lifecycle, runtime resiliency, and multi-environment release strategies.
- Strong grasp of threat modeling, secure architecture principles, and zero-trust application design, with experience integrating security throughout the software development lifecycle.
- Demonstrated experience using GenAI tools (e.g., GitHub Copilot) to enhance the software development lifecycle (prompt engineering for code generation, automated test creation, refactoring, and architectural validation), with a focus on responsible use, prompt design and maximizing engineering efficiency.

Desired Skills
- Experience modernizing legacy applications to modern cloud-native architectures (e.g., microservices, event-driven).
- Experience with big data platforms or architectures supporting real-time or large-scale transactional systems would be a big plus.
- Exposure to AI/ML workflows, including integration with ML APIs and orchestration of AI-powered features.
- Demonstrated ability to explore emerging technologies like platform engineering, internal developer tooling and AI-augmented architecture.

Work Timings
11:30 AM to 8:30 PM IST

Job Location
Mumbai, Chennai, Hyderabad
Posted 2 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Role: RCE_Risk Data Engineer/Lead (Description - External)

Job Description
Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies.

The ideal candidate, with 3-6 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars that our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation.

In this role you will:
- Handle ingestion and provisioning of raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases.
- Drive improvements in the reliability and frequency of data ingestion, including increasing real-time coverage.
- Support and enhance data ingestion infrastructure and pipelines.
- Design and implement data pipelines that collect data from disparate sources across the enterprise, and from external sources, and deliver it to our data platform.
- Build Extract, Transform and Load (ETL) workflows, using both advanced data manipulation tools and programmatic manipulation of data throughout our data flows, ensuring data is available at each stage in the data flow, and in the form needed for each system, service and customer along said data flow.
- Identify and onboard data sources using existing schemas and, where required, conduct exploratory data analysis to investigate and provide solutions.
- Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities.

Core/Must-Have Skills
- 3-8 years of expertise in designing and implementing data warehouses and data lakes using the Oracle tech stack (DB: PL/SQL).
- At least 4+ years of experience in database design and dimensional modelling using Oracle PL/SQL.
- Experience working with advanced PL/SQL concepts (materialized views, global temporary tables, partitions, PL/SQL packages).
- Experience in SQL tuning, tuning of PL/SQL solutions, and physical optimization of databases.
- Experience in writing and tuning SQL scripts, including tables, views, indexes and complex PL/SQL objects (procedures, functions, triggers and packages) in Oracle Database 11g or higher.
- Experience in developing ETL processes (ETL control tables, error logging, auditing, data quality, etc.); should be able to implement reusability, parameterization, workflow design, etc.
- Advanced working SQL knowledge and experience working with relational and NoSQL databases, as well as working familiarity with a variety of databases (Oracle, SQL Server, Neo4j).
- Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems.
- Strong understanding of ETL methodologies and best practices.
- Collaborate with cross-functional teams to ensure successful implementation of solutions.
- Experience with OLAP and OLTP databases, and data structuring/modelling with an understanding of key data points.

Good to have:
- Experience working in the Financial Crime, Financial Risk and Compliance technology transformation domains.
- Certification on any cloud tech stack.
- Experience building and optimizing data pipelines on AWS Glue or Oracle Cloud.
- Design and development of systems for the maintenance of the Azure/AWS lakehouse, ETL processes, business intelligence and data ingestion pipelines for AI/ML use cases.
- Experience with data visualization (Power BI/Tableau) and SSRS.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 days ago
9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modeling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault 2.0.

Responsibilities
- Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF).
- Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling.
- Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer).
- Develop and maintain bronze → silver → gold data layers using DBT or Coalesce (a minimal illustrative sketch of this layering follows this listing).
- Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery.
- Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata.
- Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams).
- Work closely with QA teams to integrate test automation and ensure data quality.
- Collaborate with cross-functional teams, including data scientists and business stakeholders, to align solutions with AI/ML use cases.
- Document architectures, pipelines, and workflows for internal stakeholders.

Requirements

Essential Skills

Job
- Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid).
- Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python.
- Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts.
- Strong understanding of data modeling techniques, including CEDM, Data Vault 2.0, and dimensional modeling.
- Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF triggers.
- Expertise in monitoring and logging with CloudWatch, AWS Glue metrics, MS Teams alerts, and Azure Data Explorer (ADX).
- Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection.
- Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates.
- Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great Expectations.

Personal
- Excellent communication and interpersonal skills, with the ability to engage with teams.
- Strong problem-solving, decision-making, and conflict-resolution abilities.
- Proven ability to work independently and lead cross-functional teams.
- Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism.
- Ability to maintain confidentiality and handle sensitive information with attention to detail and discretion.
- Strong work ethic and trustworthiness.
- Highly collaborative and team-oriented, with a commitment to excellence.

Preferred Skills

Job
- Proficiency in SQL and at least one programming language (e.g., Python, Scala).
- Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services.
- Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica).
- Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs).
- Experience with data modeling, data structures, and database design.
- Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
- Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).

Personal
- Demonstrates proactive thinking.
- Strong interpersonal relations, expert business acumen and mentoring skills.
- Ability to work under stringent deadlines and demanding client conditions.
- Ability to work under pressure to achieve multiple daily deadlines for client deliverables with a mature approach.

Other Relevant Information
- Bachelor's in Engineering with specialization in Computer Science, Artificial Intelligence, Information Technology or a related field.
- 9+ years of experience in data engineering and data architecture.

LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
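The listing above refers to maintaining bronze → silver → gold data layers. Purely as an illustration, and not taken from the posting, the sketch below shows that layering idea with pandas: raw records land in a bronze frame, the silver step deduplicates and types them, and the gold step aggregates for consumption. The sample columns and values are invented; a production pipeline would use Databricks/PySpark with DBT or Coalesce models over real sources.

```python
# Minimal sketch of a bronze -> silver -> gold refinement flow using pandas,
# for illustration only; the table shape and values are assumptions.
import pandas as pd

# Bronze: raw ingested records, kept as delivered (loose types, duplicates possible).
bronze = pd.DataFrame(
    {
        "order_id": [1, 1, 2, 3],
        "region": ["IN", "IN", "US", None],
        "amount": ["10", "10", "25", "7"],
    }
)

# Silver: cleaned and conformed - deduplicated, invalid rows dropped, amounts typed.
silver = (
    bronze.drop_duplicates()
    .dropna(subset=["region"])
    .assign(amount=lambda df: df["amount"].astype(float))
)

# Gold: business-level aggregate ready for reporting or ML features.
gold = silver.groupby("region", as_index=False)["amount"].sum()

print(gold)
```

The design point the layering captures is that each stage only ever reads from the one before it, so data quality rules and reprocessing stay localized to a single layer.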
Posted 2 days ago
6.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
JOB_POSTING-3-71493-1

Job Description

Role Title: AVP, Enterprise Logging & Observability (L11)

Company Overview
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India's Best Companies to Work for by Great Place to Work. We were among the Top 50 India's Best Workplaces in Building a Culture of Innovation by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by the AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and among Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.

Organizational Overview
Splunk is Synchrony's enterprise logging solution. Splunk searches and indexes log files and helps derive insights from the data. The primary goal is to ingest massive datasets from disparate sources and employ advanced analytics to automate operations and improve data analysis. It also offers predictive analytics and unified monitoring for applications, services and infrastructure. Many applications forward data to the Splunk logging solution. The Splunk team, spanning Engineering, Development, Operations, Onboarding and Monitoring, maintains Splunk and provides solutions to teams across Synchrony.

Role Summary/Purpose
The AVP, Enterprise Logging & Observability is a key leadership role responsible for driving the strategic vision, roadmap, and development of the organization's centralized logging and observability platform. This role supports multiple enterprise initiatives including applications, security monitoring, compliance reporting, operational insights, and platform health tracking. The role leads platform development using Agile methodology, manages stakeholder priorities, ensures logging standards across applications and infrastructure, and supports security initiatives. This position bridges the gap between technology teams (applications, platforms, cloud, cybersecurity, infrastructure, DevOps, governance, audit and risk) and business partners, owning and evolving the logging ecosystem to support real-time insights, compliance monitoring, and operational excellence.

Key Responsibilities

Splunk Development & Platform Management
- Lead and coordinate development activities, ingestion pipeline enhancements, onboarding frameworks, and alerting solutions.
- Collaborate with engineering, operations, and Splunk admins to ensure scalability, performance, and reliability of the platform.
- Establish governance controls for source naming, indexing strategies, retention, access controls, and audit readiness.

Splunk ITSI Implementation & Management
- Develop and configure ITSI services, entities, and correlation searches.
- Implement notable-event aggregation policies and automate response actions.
- Fine-tune ITSI performance by optimizing data models, summary indexing, and saved searches.
- Help identify patterns and anomalies in logs and metrics (see the illustrative sketch after this listing).
- Develop ML models for anomaly detection, capacity planning, and predictive analytics.
- Utilize Splunk MLTK to build and train models for IT operations monitoring.

Security & Compliance Enablement
- Partner with InfoSec, Risk, and Compliance to align logging practices with regulations (e.g., PCI-DSS, GDPR, RBI).
- Enable visibility for encryption events, access anomalies, secrets management, and audit trails.
- Support security control mapping and automation through observability.

Stakeholder Engagement
- Act as a strategic advisor and point of contact for business units, application, infrastructure and security stakeholders, and business teams leveraging Splunk.
- Conduct stakeholder workshops, backlog grooming, and sprint reviews to ensure alignment.
- Maintain clear and timely communications across all levels of the organization.

Process & Governance
- Drive logging and observability governance standards, including naming conventions, access controls, and data retention policies.
- Lead initiatives for process improvement in log ingestion, normalization, and compliance readiness.
- Ensure alignment with enterprise architecture and data classification models.
- Lead improvements in logging onboarding lifecycle time, automation pipelines, and self-service ingestion tools.
- Mentor junior team members and guide engineering teams on secure, standardized logging practices.

Required Skills/Knowledge
- Bachelor's degree with a minimum of 6+ years of experience in Technology, or, in lieu of a degree, 8+ years of experience in Technology.
- Minimum of 3+ years of experience leading a development team or an equivalent role in observability, logging, or security platforms.
- Splunk Subject Matter Expert (SME): strong hands-on understanding of Splunk architecture, pipelines, dashboards, alerting, data ingestion, search optimization, and enterprise-scale operations.
- Experience supporting security use cases, encryption visibility, secrets management, and compliance logging.
- Experience across Splunk development and platform management, security and compliance enablement, stakeholder engagement, and process and governance.
- Experience with Splunk premium apps, minimally ITSI and Enterprise Security (ES).
- Experience with data streaming platforms and tools like Cribl and Splunk Edge Processor.
- Proven ability to work in Agile environments using tools such as JIRA or JIRA Align.
- Strong communication, leadership, and stakeholder management skills.
- Familiarity with security, risk, and compliance standards relevant to BFSI.
- Proven experience leading product development teams and managing cross-functional initiatives using Agile methods.
- Strong knowledge and hands-on experience with Splunk Enterprise/Splunk Cloud.
- Design and implement Splunk ITSI solutions for proactive monitoring and service health tracking.
- Develop KPIs, services, glass tables, entities, deep dives, and notable events to improve service reliability for users across the firm.
- Develop scripts (Python, JavaScript, etc.) as needed in support of data collection or integration.
- Develop new applications leveraging Splunk's analytic and machine learning tools to maximize performance, availability and security, improving business insight and operations.
- Support senior engineers in analyzing system issues and performing root cause analysis (RCA).
Desired Skills/Knowledge
- Deep knowledge of Splunk development, data ingestion, search optimization, alerting, dashboarding, and enterprise-scale operations.
- Exposure to SIEM integration, security orchestration, or SOAR platforms.
- Knowledge of cloud-native observability (e.g., AWS/GCP/Azure logging).
- Experience in BFSI or regulated industries with high-volume data handling.
- Familiarity with CI/CD pipelines, DevSecOps integration, and cloud-native logging.
- Working knowledge of scripting or automation (e.g., Python, Terraform, Ansible) for observability tooling.
- Splunk certifications (Power User, Admin, Architect, or equivalent) will be an advantage.
- Awareness of data classification, retention, and masking/anonymization strategies.
- Awareness of integration between Splunk and ITSM or incident management tools (e.g., ServiceNow, PagerDuty).
- Experience with version control tools: Git, Bitbucket.

Eligibility Criteria
- Bachelor's degree with a minimum of 6+ years of experience in Technology, or, in lieu of a degree, 8+ years of experience in Technology.
- Minimum of 3+ years of experience leading a development team or an equivalent role in observability, logging, or security platforms.
- Demonstrated success in managing large-scale logging platforms in regulated environments.
- Excellent communication, leadership, and cross-functional collaboration skills.
- Experience with scripting languages such as Python, Bash, or PowerShell for automation and integration purposes.
- Prior experience in large-scale, security-driven logging or observability platform development.
- Excellent problem-solving skills and the ability to work independently or as part of a team.
- Strong communication and interpersonal skills to interact effectively with team members and stakeholders.
- Knowledge of IT Service Management (ITSM) and monitoring tools.
- Knowledge of other data analytics tools or platforms is a plus.

Work Timings: 01:00 PM to 10:00 PM IST
This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.

For Internal Applicants
- Understand the criteria or mandatory skills required for the role before applying.
- Inform your manager and HRM before applying for any role on Workday.
- Ensure that your professional profile is updated (fields such as education, prior experience, other skills) and upload your updated resume (Word or PDF format).
- Must not be on any corrective action plan (First Formal/Final Formal, PIP).
- L9+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Level / Grade: 11
Job Family Group: Information Technology
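The listing above mentions identifying anomalies in logs and metrics and building ML models with Splunk MLTK. As a rough, standard-library-only illustration (not Splunk code and not part of the posting), the sketch below flags samples in a log-volume series whose z-score exceeds a threshold; the sample series and the 2.5 threshold are assumptions chosen purely to make the example visible.

```python
# Illustrative-only sketch of a simple anomaly check on a log-volume series:
# flag samples that deviate sharply from the rest of the window. Real work would
# run inside Splunk MLTK/ITSI against indexed data; this z-score version is an
# assumption used only to show the idea.
from statistics import mean, pstdev

events_per_minute = [120, 118, 125, 130, 122, 119, 121, 410, 123, 117]

def anomalies(series: list[int], threshold: float = 2.5) -> list[tuple[int, int]]:
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mu, sigma = mean(series), pstdev(series)
    if sigma == 0:
        return []
    return [(i, v) for i, v in enumerate(series) if abs(v - mu) / sigma > threshold]

print(anomalies(events_per_minute))  # the 410-event spike is the only sample flagged
```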
Posted 2 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview

Job Title: Senior Engineer, AVP
Location: Pune, India

Role Description
We are seeking a Data Security Engineer to design, implement and manage security measures that protect sensitive data across our organization. This role focuses on the execution and delivery of data security solutions, with an emphasis on configuration, engineering, and integration within a complex enterprise environment. While the role operates within Cybersecurity, the person will collaborate with IT, Risk Management, and Business Units on a case-by-case basis, delivering Data Loss Prevention solutions. The ideal candidate understands and manages the existing tool stack within a complex environment, navigates technical integration challenges and supports the transition from legacy solutions to new solutions within the pillar and across different areas of the bank. This role will work with specific tools like Symantec DLP and Zscaler, but requires the flexibility to evaluate and integrate new solutions like Palo Alto, Fortinet and Microsoft Purview, and capabilities in existing cloud security solutions like Azure/GCP.

What We'll Offer You
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those 35 yrs. and above

Your Key Responsibilities
- Policy development and implementation: Design and implement data loss prevention policies, standards, and procedures to protect sensitive data from unauthorized access and disclosure.
- Risk assessment: Conduct regular assessments of our implementation to identify vulnerabilities and potential threats to the organization's data. Develop strategies to mitigate identified risks.
- DLP solutions: Evaluate, deploy, and manage DLP solutions and technologies. Ensure that these tools are effectively integrated and configured to protect sensitive data across the organization.
- Monitoring and analysis: Monitor data movement and usage to detect and respond to potential data breaches or policy violations. Analyse incidents to identify root causes and develop corrective actions.
- Collaboration: Work with IT, legal, and business teams to ensure that DLP measures align with organizational goals and regulatory requirements. Provide guidance and support to stakeholders on data protection issues.
- Design and implement data security frameworks, including encryption, tokenization and anonymization techniques, within a hybrid environment.
- Implement cloud-native security controls (e.g., CASB, CSPM, DSPM) to protect data in SaaS, IaaS, and PaaS environments.
- Implement digital rights management, encryption and tokenization strategies and solutions to protect data in hybrid environments and prevent unauthorized access and disclosure.
- Deploy and manage data discovery and classification tools to identify sensitive data across structured and unstructured sources (a simplified illustrative sketch follows this listing).
- Implement automated classification and labeling strategies for compliance and risk reduction.

Your Skills And Experience

Technical Expertise
- 5+ years of hands-on experience in Data Security, Information Protection, or Cloud Security.
- Strong expertise in delivering data security platforms (Symantec, Netskope, Zscaler, Palo Alto, Fortinet, etc.).
- Knowledge of cloud service provisioning and experience with cloud security (AWS, Azure, GCP) and SaaS data protection solutions.
- Experience with Cloud Access Security Broker (CASB), SaaS Security Posture Management (SSPM) and Data Security Posture Management (DSPM).
- Proficiency in network security, endpoint protection, and identity & access management (IAM).
- Scripting knowledge (Python, PowerShell, APIs) for security automation is a plus.
- Hands-on experience with AI/ML and data-security-related remediations is a plus.

Soft Skills & Collaboration
- Strong problem-solving and analytical skills to assess security threats and data exposure risks.
- Ability to work cross-functionally with Security, IT, and Risk teams.
- Effective written and verbal communication skills, especially when documenting security configurations and investigations.
- Professional certifications such as CISSP, CISM, CCSP, GIAC (GCIH, GCFA), or CEH.

How We'll Support You
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About Us And Our Teams
Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
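The listing above mentions deploying data discovery and classification tools to find sensitive data across structured and unstructured sources. The sketch below is a deliberately simplified, illustrative stand-in, not part of the posting and not how enterprise DLP platforms such as Symantec DLP or Microsoft Purview work internally: it scans text for a couple of regex patterns and labels the matches. The patterns, labels, and sample text are assumptions.

```python
# Minimal, illustrative sketch of the discovery/classification idea: scan text for
# patterns that look like sensitive data and tag each finding. The regexes below are
# simplified assumptions, not production DLP rules (no validation such as Luhn checks).
import re

PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "possible_card_number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def classify(text: str) -> list[tuple[str, str]]:
    """Return (label, matched_value) pairs for every sensitive-looking match."""
    return [(label, m.group()) for label, rx in PATTERNS.items() for m in rx.finditer(text)]

sample = "Contact jane.doe@example.com, card 4111 1111 1111 1111 on file."
print(classify(sample))
```

In practice this kind of pattern matching is only the first pass; classification platforms layer on dictionaries, proximity rules, and labeling policies before any blocking or masking action is taken.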
Posted 2 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Introduction
At IBM, work is more than a job - it's a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.

Your Role And Responsibilities
As a Software Developer, you will be responsible for designing, developing, and deploying software solutions for application modernization using Generative AI technologies. This role requires a strong technical foundation, expertise in cloud-based AI/ML solutions, and an ability to optimize software performance through modern architectures.

Key Responsibilities
- Develop and deploy modernized applications utilizing Generative AI technologies.
- Write efficient, scalable, and maintainable code using Python and JavaScript (Node.js, TypeScript); experience with FastAPI is a plus.
- Leverage IBM Cloud services and AI/ML technologies, particularly watsonx, for intelligent application development.
- Work with developer tools such as GitHub, VSCode, and CI/CD pipelines to streamline development.
- Design, optimize, and maintain relational and NoSQL databases like Db2, PostgreSQL, and MongoDB.
- Develop and manage containerized applications using Docker, Kubernetes, and OpenShift.
- Implement microservices and serverless architectures for scalable solutions.
- Solve complex software challenges with strong analytical and problem-solving skills.
- Collaborate with cross-functional teams for AI-driven application integration and deployment.

Required Technical And Professional Expertise
- 5+ years of software development experience with a strong foundation in coding, debugging, and optimization.
- Proficiency in Python and JavaScript (Node.js, TypeScript) frameworks; FastAPI experience is advantageous.
- Expertise in IBM Cloud services and AI/ML technologies, especially watsonx.
- Hands-on experience with DevOps tools, including GitHub, VSCode, and CI/CD automation.
- Strong understanding of database technologies such as Db2, PostgreSQL, and MongoDB.
- Extensive experience in containerization using Docker, Kubernetes, and OpenShift.
- Solid knowledge of microservices and serverless architectures.
- Excellent problem-solving and analytical skills, with a strategic approach to software development.

Preferred Technical And Professional Experience
- Familiarity with FastAPI for building lightweight and efficient applications.
- Strong knowledge of security best practices for cloud-based AI applications.
- Experience in high-performance computing and data processing pipelines.
- Understanding of AI-driven automation and optimization techniques.
- Ability to troubleshoot and optimize large-scale distributed systems.
Posted 2 days ago
1.0 - 6.0 years
1 - 3 Lacs
Pune
Work from Office
Experience: 1–5 years
Salary: As per company norms
Location: Chinchwad, Pune - Maharashtra
Joining: Immediate
Industry Type: Education / Training
Department: Trainer / Teaching Faculty
Work Mode: Work From Office (Full-time Job)
Notice Period: Immediate
Job Description: We are looking for an experienced and dynamic Data Science Trainer to join our team. The ideal candidate will be responsible for designing and delivering training sessions on business analytics concepts, tools, and applications. You will equip learners with the skills needed to interpret data, derive insights, and make strategic business decisions.
Skills: Power BI, Advanced Excel, SQL, Machine Learning, Artificial Intelligence, Python
Responsibilities and Duties
Devise technical training programs according to organizational requirements.
Produce training schedules and classroom agendas, and execute training sessions.
Determine course content according to objectives.
Prepare training material (presentations, worksheets, etc.).
Keep and report data on completed courses, absences, issues, etc.
Observe and evaluate results of training programs.
Determine the overall effectiveness of programs and make improvements.
(Note: PART-TIME WORKERS, PLEASE DO NOT APPLY)
Interested candidates can drop their CV at apardeshi@sevenmentor.com or contact 8806178325.
Posted 2 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Would you like to work on the team that powers the most popular operating system – Windows – and impact over a billion people globally with your day-to-day work? If yes, come join us! We are the Windows Developer Platform team, and we build the platform that developers use to build the most engaging apps for Windows. We are looking for a Principal Software Engineer to join the team and take the platform forward in its evolution. We want to expand the capabilities of the Windows app platform and need you to help us drive the revolution. It is a unique opportunity to work on both Microsoft technologies and one of the largest customer bases in the world! You will also get an opportunity to collaborate across various teams within the Windows group and across product groups within the company, and to work with some of the best minds in the world! The more diverse our team, the more inclusive our end result. To that end, we encourage applicants from any background and with any perspective. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities
We are building a center of excellence for the client platform in the Windows India organization. The platform enables first-party and third-party developers to build amazing Windows apps. As a Principal Software Engineer, you will be responsible for designing and developing high-quality software components and libraries for Windows developers. You will expose these capabilities via APIs that follow consistent patterns and are scalable, extensible, and maintainable. You will also play a key role in open-ended explorations, prototyping, and identifying opportunities for our developers. You will have the opportunity to learn and grow by working closely with the architects, senior engineers, Program Managers, and AI/ML scientists who contribute to the overall technical vision and strategy - the "architectural how" of building a scalable architecture with great fundamentals (such as performance, power, and reliability). You may also interact with our open-source community developers via GitHub.
Qualifications
Required Qualifications
Bachelor's Degree in Computer Science OR related technical field AND 10+ years technical engineering experience with coding in languages including C++ OR C#.
Deep technical experience, including leading others.
Experience researching (and perhaps prototyping and beyond) new ways of doing something.
Demonstrates a mastery of communication, data presentation, and storytelling skills.
Exhibits a growth mindset and humility while working through high-stakes scenarios.
Proven experience as an ally who can further a more open, diverse, and inclusive workplace with a goal of everyone feeling like they belong.
Demonstrated hypothesis-driven, problem-solving orientation.
Strong technical and analytical skills, and a passion for customers.
Strong design, coding, debugging, teamwork, and communication skills.
10+ years of experience shipping commercial software.
5+ years of experience with C++ and/or C#.
Preferred Qualifications
Experience with Windows development tools and technologies, including Visual Studio and the Windows SDK. XAML familiarity is a plus.
Win32 application and systems programming experience will be a bonus.
Experience working on open-source projects on GitHub.
Other Requirements
Candidates must be able to meet Microsoft, customer, and/or government security screening requirements that are required for this role. These requirements include, but are not limited to, the following specialized security screenings:
Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.
Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 2 days ago
5.0 years
0 Lacs
India
Remote
ORANTS AI is a cutting-edge technology company at the forefront of AI and Big Data innovation. We specialize in developing advanced marketing and management platforms, leveraging data mining, data integration, and artificial intelligence to deliver efficient and impactful solutions for our corporate clients. We're a dynamic, remote-first team committed to fostering a collaborative and flexible work environment.
Salary: 40 - 43 LPA + Variable
Location: Remote (India)
Work Schedule: Flexible Working Hours
Join ORANTS AI as a Senior AI Engineer and contribute to the development of our intelligent marketing and management platforms. We're looking for an experienced professional who can design, implement, and deploy advanced AI models and algorithms to solve complex business problems.
Responsibilities:
Design, develop, and deploy machine learning and deep learning models for various applications (e.g., natural language processing, predictive analytics, recommendation systems).
Collaborate with data scientists to translate research prototypes into production-ready solutions.
Optimize AI models for performance, scalability, and efficiency.
Implement robust data pipelines for training and inference.
Stay current with the latest advancements in AI/ML research and technologies.
Participate in the entire AI lifecycle, from data collection and preparation to model deployment and monitoring.
Requirements:
5+ years of experience as an AI/ML Engineer.
Strong proficiency in Python and relevant AI/ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
Experience with various machine learning algorithms and techniques.
Solid understanding of data structures, algorithms, and software design principles.
Experience with cloud platforms (AWS, Azure, GCP) and MLOps practices.
Familiarity with big data technologies (e.g., Spark, Hadoop) is a plus.
Excellent problem-solving skills and a strong analytical mindset.
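For candidates gauging the baseline this posting implies, the following is a minimal sketch of the train-and-evaluate loop at the start of the "data collection to deployment" lifecycle it mentions. The synthetic data, feature construction, and model choice are illustrative assumptions only, not the company's pipeline.

```python
# Minimal scikit-learn training/evaluation sketch on synthetic data (illustrative only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))             # synthetic features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A pipeline keeps preprocessing and the model together, which simplifies deployment later.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```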
Posted 2 days ago
10.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Summary
We’re looking for a talented Solution Architect with a strong foundation in designing and developing large-scale enterprise applications, and a growing interest or experience in modern AI/ML-driven technologies. This role is ideal for someone who is confident in architecture, passionate about emerging trends like AI/ML, and eager to help shape intelligent systems in collaboration with engineering and business teams.
Design scalable, secure, and maintainable enterprise application architectures.
Translate business needs into clear technical solutions and design patterns.
Lead design discussions, code reviews, and solution planning with internal teams.
Guide development teams by providing architectural direction and mentoring.
Collaborate with DevOps for smooth deployment and CI/CD implementation.
Participate in client meetings and technical solutioning discussions.
Explore and propose the use of AI/ML capabilities where relevant, especially in areas like intelligent search, automation, and data insights.
Must-Have Skills & Qualifications
8–10 years of experience in software development and solution architecture.
Hands-on expertise in either Python or C# .NET.
Deep understanding of software architecture patterns such as microservices, event-driven, and layered designs.
Experience with cloud platforms (AWS, Azure, or GCP).
Solid knowledge of databases (SQL & NoSQL), APIs, and integration techniques.
Exposure to or strong interest in AI/ML technologies, especially those involving intelligent automation or data-driven systems.
Good interpersonal and communication skills; experience interfacing with clients.
Capability to lead technical teams and ensure delivery quality.
Preferred Skills
Awareness of LLMs, vector databases (e.g., Pinecone, FAISS), or RAG-based systems is a plus.
Familiarity with Docker, Kubernetes, or DevOps workflows.
Knowledge of MLOps or experience working alongside data science teams.
Certifications in cloud architecture or AI/ML are a bonus.
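Since the posting lists vector databases and RAG-based systems as preferred exposure, here is a minimal sketch of the retrieval half of such a system using FAISS (installable as the faiss-cpu package). The random vectors stand in for real text embeddings and the flat index is an illustrative assumption; this is not any particular client's stack.

```python
# Minimal FAISS vector-search sketch (random vectors stand in for text embeddings).
import numpy as np
import faiss  # pip install faiss-cpu

dim = 64
rng = np.random.default_rng(0)
doc_vectors = rng.random((1000, dim)).astype("float32")   # stand-in for document embeddings

index = faiss.IndexFlatL2(dim)   # exact L2 index; larger systems often swap in IVF/HNSW
index.add(doc_vectors)

query = rng.random((1, dim)).astype("float32")            # stand-in for a query embedding
distances, ids = index.search(query, 5)                   # top-5 nearest documents
print("nearest document ids:", ids[0])
# In a RAG pipeline, the retrieved documents would then be passed to an LLM as context.
```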
Posted 2 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job brief - Pre-Sales Consultant
We are looking for a Presales Consultant to join our business and provide presales support to our customers. A Presales Consultant plays a key role in building customer confidence, addressing technical concerns, and winning sales opportunities. Presales Specialists support the sales team by providing technical expertise, demonstrations, and solution presentations to potential customers. Their ability to understand customer requirements and propose tailored solutions drives successful presales engagements. You will also work closely with other employees to ensure customer questions and concerns are addressed in a timely manner.
Responsibilities
Your main responsibilities will include:
Being a champion on Artificial Intelligence, Machine Learning, Large Language Models, Conversational AI, and omni-channel automation solutions
Being the first point of contact and advisor to the customer on all matters related to Conversational AI, Agent Assist & Analytics
Developing solutions for AI & ML based omni-channel automation
Organizing, planning, creating & delivering compelling proof-of-concept demonstrations
Ensuring solutions stated in the Statement of Work are best practice and in line with client requirements
Managing the sales bid process by responding to RFIs & RFPs
Working closely with Sales, Engineering, Delivery and project teams to ensure the successful closure of the sales process
Liaising with Product Managers to provide feedback from clients about product requirements, roadmaps, etc.
Keeping abreast of market trends, product & competitor landscapes
Requirements and skills
Proven work experience as a Pre-Sales Consultant or similar role for a minimum of three years
Working in a SaaS organization is a must
Experience in dealing with large and mid-size enterprise customers in India
A degree in Computer Science, Engineering, or a related field
Strong problem-solving and prioritization skills
Strong presentation skills
Excellent interpersonal and communication skills, and the ability to work with multiple stakeholders
Technical skills across LLMs, AI, ML, SaaS, and contact centre technology, plus a clear grasp of digital technology stacks (APIs, middleware, ecosystems) and the ability to envision solutions, are a must
Experience working with a diverse group (Developers, Program Managers, Sales, Pre-sales) is required
This role requires a fair amount of travel to meet clients, as well as working from the office location
Posted 2 days ago
0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Job Description: The AI/ML Engineer role requires a blend of expertise in machine learning operations (MLOps), ML engineering, data science, Large Language Models (LLMs), and software engineering principles.
Skills you'll need to bring:
Experience building production-quality ML and AI systems.
Experience in MLOps and real-time ML and LLM model deployment and evaluation.
Experience with RAG frameworks and agentic workflows is valuable.
Proven experience deploying and monitoring large language models (e.g., Llama, Mistral).
Ability to improve evaluation accuracy and relevancy using creative, cutting-edge techniques from both industry and new research.
Solid understanding of real-time data processing and monitoring tools for model drift and data validation.
Knowledge of observability best practices specific to LLM outputs, including semantic similarity, compliance, and output quality.
Strong programming skills in Python and familiarity with API-based model serving.
Experience with LLM management and optimization platforms (e.g., LangChain, Hugging Face).
Familiarity with data engineering pipelines for real-time input-output logging and analysis.
Qualifications:
Experience working with common AI-related models, frameworks, and toolsets like LLMs, vector databases, NLP, prompt engineering, and agent architectures.
Experience in building AI and ML solutions.
Strong software engineering skills for the rapid and accurate development of AI models and systems.
Proficiency in programming languages such as Python.
Hands-on experience with technologies like Databricks and Delta Tables.
Broad understanding of data engineering (SQL, NoSQL, Big Data), Agile, UX, Cloud, software architecture, and ModelOps/MLOps.
Experience in CI/CD and testing, with experience building container-based stand-alone applications using tools like GitHub, Jenkins, Docker, and Kubernetes.
Responsibilities:
Participate in research and innovation on data science projects that have an impact on our products and customers globally.
Apply ML expertise to train models, validate the accuracy of the models, and deploy the models at scale to production.
Apply best practices in MLOps, LLMOps, data science, and software engineering to ensure the delivery of clean, efficient, and reliable code.
Aggregate huge amounts of data from disparate sources to discover the patterns and features necessary to automate the analytical models.
About Company
Improva is a global IT solution provider and outsourcing company with contributions across several domains including FinTech, Healthcare, Insurance, Airline, Ecommerce & Retail, Logistics, Education, Startups, Government & Semi-Government, and more.
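The observability requirement above mentions scoring LLM outputs for semantic similarity. As a rough, dependency-light illustration of that idea (a TF-IDF cosine proxy rather than the embedding models a production setup would more likely use), consider the sketch below; the reference and output strings are invented examples.

```python
# Rough semantic-similarity check for an LLM output against a reference answer.
# TF-IDF cosine similarity is only a lexical proxy for true semantic similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "The customer was charged twice because the payment retried after a timeout."
llm_output = "A timeout caused the payment to retry, so the customer saw a duplicate charge."

vectors = TfidfVectorizer().fit_transform([reference, llm_output])
score = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"similarity: {score:.2f}")  # outputs below a chosen threshold could be flagged for review
```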
Posted 2 days ago
1.0 - 3.0 years
1 - 2 Lacs
Nagercoil
Work from Office
Job Overview: We are looking for a skilled Python and Data Science Programmer to develop and implement data-driven solutions. The ideal candidate should have strong expertise in Python, machine learning, data analysis, and statistical modeling.
Key Responsibilities:
Data Analysis & Processing: Collect, clean, and preprocess large datasets for analysis.
Machine Learning: Build, train, and optimize machine learning models for predictive analytics.
Algorithm Development: Implement data science algorithms and statistical models for problem-solving.
Automation & Scripting: Develop Python scripts and automation tools for data processing and reporting.
Data Visualization: Create dashboards and visual reports using Matplotlib, Seaborn, Plotly, or Power BI/Tableau.
Database Management: Work with SQL and NoSQL databases for data retrieval and storage.
Collaboration: Work with cross-functional teams, including data engineers, business analysts, and software developers.
Research & Innovation: Stay updated with the latest trends in AI, ML, and data science to improve existing models.
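To make the data-cleaning and visualization responsibilities above concrete, here is a minimal pandas/Matplotlib sketch on synthetic data; the column names, the median-imputation rule, and the chart are illustrative assumptions, not a prescribed workflow.

```python
# Minimal data-prep and visualization sketch (synthetic data; names are illustrative).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "region": rng.choice(["North", "South", "East", "West"], size=200),
    "sales": rng.gamma(shape=2.0, scale=1500.0, size=200),
})
df.loc[rng.choice(df.index, size=10), "sales"] = np.nan    # simulate missing values

df["sales"] = df["sales"].fillna(df["sales"].median())     # simple cleaning step
summary = df.groupby("region")["sales"].mean().sort_values()

summary.plot(kind="bar", title="Average sales by region")  # quick dashboard-style chart
plt.tight_layout()
plt.savefig("sales_by_region.png")
```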
Posted 2 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.
What You’ll Be Doing...
We are seeking a visionary and technically strong Senior AI Architect to join our Billing IT organization in driving innovation at the intersection of telecom billing, customer experience, and artificial intelligence. This leadership role will be pivotal in designing, developing, and scaling AI-led solutions that redefine how we bill our customers, improve their billing experience, and derive actionable insights from billing data. You will work closely with cross-functional teams to lead initiatives that transform customer-facing systems, backend data platforms, and software development practices through modern AI technologies.
Key Responsibilities
Customer Experience Innovation: Designing and implementing AI-driven enhancements to improve telecom customer experience, particularly in the billing domain. Leading end-to-end initiatives that personalize, simplify, and demystify billing interactions for customers.
AI Tools and Platforms: Evaluating and implementing cutting-edge AI/ML models, LLMs, SLMs, and AI-powered solutions for use across the billing ecosystem. Developing prototypes and production-grade AI tools to solve real-world customer pain points.
Prompt Engineering & Applied AI: Exhibiting deep expertise in prompt engineering and advanced LLM usage to build conversational tools, intelligent agents, and self-service experiences for customers and support teams. Partnering with design and development teams to build intuitive AI interfaces and utilities.
AI Pair Programming Leadership: Demonstrating hands-on experience with AI-assisted development tools (e.g., GitHub Copilot, Codeium). Driving adoption of such tools across development teams, tracking measurable productivity improvements, and integrating them into SDLC pipelines.
Data-Driven Insight Generation: Leading large-scale data analysis initiatives using AI/ML methods to generate meaningful business insights, predict customer behavior, and prevent billing-related issues. Establishing feedback loops between customer behavior and billing system design.
Thought Leadership & Strategy: Acting as a thought leader in AI and customer experience within the organization. Staying abreast of trends in AI and telecom customer experience; regularly benchmarking internal initiatives against industry best practices.
Architectural Excellence: Owning and evolving the technical architecture of AI-driven billing capabilities, ensuring scalability, performance, security, and maintainability. Collaborating with enterprise architects and domain leads to align with broader IT and digital transformation goals.
Telecom Billing Domain Expertise: Bringing a deep understanding of telecom billing functions, processes, and IT architectures, including usage processing, rating, billing cycles, invoice generation, adjustments, and revenue assurance.
Providing architectural guidance to ensure AI and analytics solutions are well integrated into core billing platforms with minimal operational risk.
Where you'll be working...
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
What We’re Looking For...
You’re energized by the prospect of putting your advanced expertise to work as one of the most senior members of the team. You’re motivated by working on groundbreaking technologies to have an impact on people’s lives.
You’ll Need To Have
Bachelor’s degree or four or more years of work experience.
Six or more years of relevant experience required, demonstrated through one or a combination of work
Strong understanding of AI/ML concepts, including generative AI and LLMs (Large Language Models), with the ability to evaluate and apply them to solve real-world problems in telecom and billing.
Familiarity with industry-leading AI models and platforms (e.g., OpenAI GPT, Google Gemini, Microsoft Phi, Meta LLaMA, AWS Bedrock), and understanding of their comparative strengths, pricing models, and applicability.
Ability to scan and interpret AI industry trends, identify emerging tools, and match them to business use cases (e.g., bill explainability, predictive analytics, anomaly detection, agent assist).
Skilled in adopting and integrating third-party AI tools, rather than building from scratch, into existing IT systems, ensuring fit-for-purpose usage with strong ROI.
Experience working with AI product vendors, evaluating PoCs, and influencing make-buy decisions for AI capabilities.
Comfortable guiding cross-functional teams (tech, product, operations) on where and how to apply AI tools, including identifying appropriate use cases and measuring impact.
Deep expertise in writing effective and optimized prompts across various LLMs.
Knowledge of prompt chaining, tool-use prompting, function calling, embedding techniques, and vector search optimization.
Ability to mentor others on best practices for LLM prompt engineering and prompt tuning.
In-depth understanding of telecom billing functions: mediation, rating, charging, invoicing, adjustments, discounts, taxes, collections, and dispute management.
Strong grasp of billing SLAs, accuracy metrics, and compliance requirements in a telecom environment.
Proven ability to define and evolve cloud-native, microservices-based architectures with AI components.
Deep understanding of software engineering practices including modular design, API-first development, testing automation, and observability.
Experience in designing scalable, resilient systems for high-volume data pipelines and customer interactions.
Demonstrated hands-on use of tools like GitHub Copilot, Codeium, AWS CodeWhisperer, etc.
Strong track record in scaling adoption of AI pair programming tools across engineering teams.
Ability to quantify productivity improvements and integrate tooling into CI/CD pipelines.
Skilled in working with large-scale structured and unstructured billing and customer data.
Proficiency in tools like SQL, Python (Pandas, NumPy), Spark, and data visualization platforms (e.g., Power BI, Tableau).
Experience designing and operationalizing AI/ML models to derive billing insights, detect anomalies, or improve revenue assurance.
Excellent ability to translate complex technical concepts to business stakeholders.
Influential leadership with a track record of driving innovation, change management, and cross-functional collaboration.
Ability to coach and mentor engineers, analysts, and product owners on AI technologies and best practices.
Keen awareness of emerging AI trends, vendor platforms, open-source initiatives, and market best practices.
Active engagement in AI communities, publications, or proof-of-concept experimentation.
Even better if you have one or more of the following:
A master’s degree
If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above.
Scheduled Weekly Hours: 40
Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
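As a concrete illustration of the billing anomaly detection this posting calls out (not Verizon's actual method), here is a minimal sketch using scikit-learn's IsolationForest on synthetic invoice data; the field names, injected outliers, and contamination rate are assumptions made for the example.

```python
# Minimal billing anomaly-detection sketch on synthetic invoices (all names are illustrative).
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
invoices = pd.DataFrame({
    "invoice_amount": rng.normal(60, 10, size=1000),
    "data_usage_gb": rng.normal(25, 5, size=1000),
})
# Inject a few clearly anomalous bills so the model has something to find.
invoices.loc[:4, ["invoice_amount", "data_usage_gb"]] = [[400, 300]] * 5

model = IsolationForest(contamination=0.01, random_state=0)
invoices["anomaly"] = model.fit_predict(invoices[["invoice_amount", "data_usage_gb"]])
flagged = invoices[invoices["anomaly"] == -1]   # -1 marks suspected anomalies
print(f"flagged {len(flagged)} invoices for review")
```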
Posted 2 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Bentley Systems
Position Summary
The successful candidate will be responsible for designing, developing, and maintaining cloud applications and implementing new features, services, and solutions for the Asset Operations product line in Bentley Infrastructure Cloud. The job offers excellent benefits and a great opportunity to learn cutting-edge technologies.
Key Responsibilities
Direct the technical path of projects, ensuring they align with business goals and adhere to industry best practices.
Continuously improve with a team in an Agile, Continuous Integration, and Continuous Delivery software development process.
Ensure best practices are used and constantly improved for the team's development process.
Design and develop software solutions, including establishing architecture and standards.
Establish and use metrics to show continuous improvement within and outside of the team.
Participate in defining and interpreting feature requests, documenting those requests in operational specifications, and designing specific services and features for stability, security, usability, and maintainability.
Identify opportunities for improvement and innovation in software development processes and technologies.
Mentor junior engineers, provide technical guidance, and foster collaboration within the team.
Collaborate with other teams (product management, UX/UI, QA) to ensure successful project outcomes.
Qualifications And Required Knowledge
Education: Bachelor’s or Master’s degree in computer science or software engineering
Management background: 3+ years of people management experience with demonstrable training through courses, books, or other mediums.
Technical background: 5+ years of strong ASP.NET web framework and React coding experience
Programming Experience: Robust knowledge of multiple programming languages and libraries, such as C#, HTML5, CSS, JavaScript, React, and TypeScript
Database: Solid understanding of database and persistence design (PostgreSQL, MongoDB, SQL Server, Oracle)
Solid understanding of Responsive Design, Object-Oriented Design, Design Patterns, and advanced programming concepts
Strong knowledge of operating systems, debugging, builds, and bug tracking
Nice to have: experience with the Microsoft Azure Cloud Platform or consuming Azure Cloud services, and Temporal services
Knowledge of Agile, CI, CD, and DevOps processes
Nice to have: exposure to AI/ML and LLMs
Skills Or Abilities
Leadership and influence skills to direct the activities of other engineers and provide effective coaching and training
Effective problem-solving skills to determine the cause of bugs and resolve complaints
Strong oral communication skills to train, coach, and collaborate with other staff
Conflict management techniques focusing on empathy and emotional intelligence
Proven written communication skills to produce informative reports and build technical documentation
Public speaking skills to give presentations to software engineers and the management team
Ability to analyze internal business processes and establish the best approach using practical and pragmatic actions
Strong sense of logic and engineering workflow
Organization and delegation skills to break large projects down into milestones
What We Offer:
A great team and culture – please see our Recruitment Video.
An exciting career as an integral part of a world-leading software company providing solutions for architecture, engineering, and construction.
Competitive salary and benefits.
The opportunity to work within a global and diverse international team.
A supportive and collaborative environment.
Colleague Recognition Awards.
About Bentley Systems:
Bentley Systems (Nasdaq: BSY) is the infrastructure engineering software company. We provide innovative software to advance the world’s infrastructure – sustaining both the global economy and environment. Our industry-leading software solutions are used by professionals, and organizations of every size, for the design, construction, and operations of roads and bridges, rail and transit, water and wastewater, public works and utilities, buildings and campuses, mining, and industrial facilities. Our offerings, powered by the iTwin Platform for infrastructure digital twins, include MicroStation and Bentley Open applications for modeling and simulation, Seequent’s software for geoprofessionals, and Bentley Infrastructure Cloud encompassing ProjectWise for project delivery, SYNCHRO for construction management, and AssetWise for asset operations. Bentley Systems’ 5,200 colleagues generate annual revenues of more than $1 billion in 194 countries. www.bentley.com
Equal Opportunity Employer:
Bentley is proud to be an equal opportunity employer and considers for employment all qualified applicants without regard to race, color, gender/gender identity, sexual orientation, disability, marital status, religion/belief, national origin, caste, age, or any other characteristic protected by local law or unrelated to job qualifications.
Posted 2 days ago
7.0 - 12.0 years
18 - 22 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Data Scientist Engineer (AI/ML). Key skills: data collection, architecture creation, Python, R, data analysis, Pandas, NumPy, Matplotlib, Git, TensorFlow, PyTorch, Scikit-Learn, Keras, cloud platforms (AWS / Azure / GCP), Docker, Kubernetes, Big Data, Hadoop, Spark.
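Since this listing is only a bare keyword set, here is a minimal Keras example of the model-building skill it names, trained on synthetic data; the architecture, data, and hyperparameters are purely illustrative assumptions.

```python
# Minimal Keras model sketch on synthetic data (architecture is illustrative only).
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(3)
X = rng.normal(size=(256, 20)).astype("float32")
y = (X.sum(axis=1) > 0).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print("training accuracy:", model.evaluate(X, y, verbose=0)[1])
```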
Posted 2 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Lead Backend Engineer
About Us
FICO, originally known as Fair Isaac Corporation, is a leading analytics and decision management company that empowers businesses and individuals around the world with data-driven insights. Known for pioneering the FICO® Score, a standard in consumer credit risk assessment, FICO combines advanced analytics, machine learning, and sophisticated algorithms to drive smarter, faster decisions across industries. From financial services to retail, insurance, and healthcare, FICO's innovative solutions help organizations make precise decisions, reduce risk, and enhance customer experiences. With a strong commitment to ethical use of AI and data, FICO is dedicated to improving financial access and inclusivity, fostering trust, and driving growth for a digitally evolving world.
The Opportunity
As a Lead Backend Engineer on our Generative AI team, you will work at the frontier of language model applications, developing novel solutions for various areas of the FICO platform, including fraud investigation, decision automation, process flow automation, and optimization. We seek a highly skilled engineer with a strong foundation in digital product development and a zeal for innovation, who will be responsible for deploying product updates, identifying production issues, and implementing integrations. The backend engineer should thrive in agile, fast-paced environments, champion DevOps and CI/CD best practices, and consistently deliver scalable, customer-focused backend solutions. You will have the opportunity to make a meaningful impact on FICO’s platform by infusing it with next-generation AI capabilities. You’ll work with a team, leveraging skills to build solutions and drive innovation forward.
What You’ll Contribute
Design, develop, and maintain high-performance, scalable Python-based backend systems powering ML and Generative AI products.
Collaborate closely with ML engineers, data scientists, and product managers to build reusable APIs and services that support the full ML lifecycle, from data ingestion to inference and monitoring.
Take end-to-end ownership of backend services, including design, implementation, testing, deployment, and maintenance.
Implement product changes across the SDLC: detailed design, unit/integration testing, documentation, deployment, and support.
Contribute to architecture discussions and enforce coding best practices and design patterns across the engineering team.
Participate in peer code reviews and PR approvals, and mentor junior developers by removing technical blockers and sharing expertise.
Work with the QA and DevOps teams to enable CI/CD, build pipelines, and ensure product quality through automated testing and performance monitoring.
Translate business and product requirements into robust engineering deliverables and detailed technical documentation.
Build backend infrastructure that supports ML pipelines, model versioning, performance monitoring, and retraining loops.
Engage in prototyping efforts, collaborating with internal and external stakeholders to design PoVs and pilot solutions.
What We’re Seeking
8+ years of software development experience, with at least 3 years in a technical or team leadership role.
Deep expertise in Python, including design and development of reusable, modular API packages for ML and data science use cases.
Strong understanding of REST and gRPC APIs, including schema design, authentication, and versioning.
Familiarity with ML workflows, MLOps, and tools such as MLflow, FastAPI, TensorFlow, PyTorch, or similar.
Strong experience building and maintaining microservices and distributed backend systems in production environments.
Solid knowledge of cloud-native development and experience with platforms like AWS, GCP, or Azure.
Familiarity with Kubernetes, Docker, Helm, and deployment strategies for scalable AI systems.
Proficiency in SQL and NoSQL databases, with experience designing performant database schemas.
Experience with messaging and streaming platforms like Kafka is a plus.
Understanding of software engineering best practices, including unit testing, integration testing, TDD, code reviews, and performance tuning.
Exposure to frontend technologies such as React or Angular is a bonus, though not mandatory.
Experience integrating with LLM APIs and an understanding of prompt engineering and vector databases.
Exposure to Java or Spring Boot in hybrid technology environments will be a bonus.
Excellent collaboration and communication skills, with a proven ability to work effectively in cross-functional, globally distributed teams.
A bachelor’s degree in Computer Science, Engineering, or a related discipline, or equivalent hands-on industry experience.
Our Offer to You
An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others.
The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so.
An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
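Because this posting names MLflow among the expected MLOps tools, here is a minimal, locally tracked sketch of logging a model run with it; the run name, parameters, and toy dataset are illustrative assumptions rather than FICO's setup.

```python
# Minimal MLflow tracking sketch (logs to the local ./mlruns directory by default).
# Run name, parameters, and the toy dataset are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="baseline-logreg"):
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("max_iter", 500)          # hyperparameter for this run
    mlflow.log_metric("accuracy", acc)         # metric to compare across runs
    mlflow.sklearn.log_model(model, "model")   # versioned artifact for later serving
```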
Posted 2 days ago
0 years
0 Lacs
India
Remote
🚨 To Be Considered:
👉 You must complete the task in our GitHub repo before we consider your application:
🔗 https://github.com/Techflipp/hiring-task-dashboard-451
🔴 Before You Apply — Must-Have Requirement
We're hiring at TechFlipp — and we’re looking for a developer who can take initiative and work independently without relying on complete UI designs. You must be comfortable turning basic Figma wireframes into polished, functional interfaces, using your own strong UI/UX judgment. If you need pixel-perfect mockups to move forward, this position is not the right fit. This ability is crucial to our fast-paced development workflow, where efficiency, product thinking, and design intuition are highly valued.
🛠️ What You’ll Do:
Develop and maintain full-stack web apps using Next.js and JavaScript/TypeScript
Translate ideas and wireframes into clean, responsive, user-friendly interfaces
Work independently with minimal supervision
Write scalable, maintainable code
Debug and optimize performance with a product-minded approach
✅ What We’re Looking For:
Strong experience with Next.js and TypeScript
Confident building full-stack features in production-grade apps
Great UI/UX instincts — even with partial designs
Comfortable using Git, REST APIs, and modern deployment tools
Excellent communication and remote collaboration skills
Bonus: Experience with SaaS products and cloud deployments
🌍 Why Join TechFlipp?
At TechFlipp, we connect and optimize tech to boost profitability, reduce costs, and enhance security. We deliver tailored, locally hosted solutions in networking, software, and systems integration.
Remote-first culture & flexible hours
Work on real-world ML/computer vision projects
Join a smart, fast-moving team solving meaningful problems
💼 Ready to apply? Start by completing the GitHub task:
🔗 https://github.com/Techflipp/hiring-task-dashboard-451
We can't wait to see what you build.
Posted 2 days ago
India has seen a significant rise in the demand for Machine Learning (ML) professionals in recent years. With the growth of technology companies and increasing adoption of AI-driven solutions, the job market for ML roles in India is thriving. Job seekers with expertise in ML have a wide range of opportunities to explore in various industries such as IT, healthcare, finance, e-commerce, and more.
The average salary range for Machine Learning professionals in India varies based on experience levels. Entry-level positions such as ML Engineers or Data Scientists can expect salaries starting from INR 6-8 lakhs per annum. With experience, Senior ML Engineers or ML Architects can earn upwards of INR 15-20 lakhs per annum.
The career progression in Machine Learning typically follows a path from Junior Data Scientist or ML Engineer to Senior Data Scientist, ML Architect, and eventually to a Tech Lead or Chief Data Scientist role.
In addition to proficiency in Machine Learning itself, professionals in this field are often expected to bring a broader set of complementary technical skills.
As you explore job opportunities in Machine Learning in India, remember to hone your skills, stay updated with the latest trends in the field, and approach interviews with confidence. With the right preparation and mindset, you can land your dream ML job and contribute to the exciting world of AI and data science. Good luck!