0 years
0 Lacs
Kanpur, Uttar Pradesh, India
On-site
About Us: Planet Spark is reshaping the EdTech landscape by equipping kids and young adults with future-ready skills like public speaking, and more. We're on a mission to spark curiosity, creativity, and confidence in learners worldwide. If you're passionate about meaningful impact, growth, and innovation—you're in the right place. Location: Gurgaon (On-site) Experience Level: Entry to Early Career (Freshers welcome!) Shift Options: Domestic | Middle East | International Working Days: 5 days/week (Wednesday & Thursday off) | Weekend availability required Target Joiners: Any (Bachelor’s or Master’s) 🔥 What You'll Be Owning (Your Impact): Lead Activation: Engage daily with high-intent leads through dynamic channels—calls, video consults, and more. Sales Funnel Pro: Own the full sales journey—from first hello to successful enrollment. Consultative Selling: Host personalized video consultations with parents/adult learners, pitch trial sessions, and resolve concerns with clarity and empathy. Target Slayer: Consistently crush weekly revenue goals and contribute directly to Planet Spark’s growth engine. Client Success: Ensure a smooth onboarding experience and transition for every new learner. Upskill Mindset: Participate in hands-on training, mentorship, and feedback loops to constantly refine your game. 💡 Why Join Sales at Planet Spark? Only Warm Leads: Skip the cold calls—our leads already know us and have completed a demo session. High-Performance Culture: Be part of a fast-paced, energetic team that celebrates success and rewards hustle. Career Fast-Track: Unlock rapid promotions, performance bonuses, and leadership paths. Top-Notch Training: Experience immersive onboarding, live role-plays, and access to ongoing L&D programs. Rewards & Recognition: Weekly shoutouts, cash bonuses, and exclusive events to celebrate your wins. Make Real Impact: Help shape the minds of tomorrow while building a powerhouse career today. 🎯 What You Bring to the Table: Communication Powerhouse: You can build trust and articulate ideas clearly in both spoken and written formats. Sales-Driven: You know how to influence decisions, navigate objections, and close deals with confidence. Empathy First: You genuinely care about clients’ goals and tailor your approach to meet them. Goal-Oriented: You’re self-driven, proactive, and hungry for results. Tech Fluent: Comfortable using CRMs, video platforms, and productivity tools. ✨ What’s in It for You? 💼 High-growth sales career with serious earning potential 🌱 Continuous upskilling in EdTech, sales, and communication 🧘 Supportive culture that values growth and well-being 🎯 Opportunity to work at the cutting edge of education innovation
Posted 2 days ago
0 years
0 Lacs
Meerut, Uttar Pradesh, India
On-site
About Us: Planet Spark is reshaping the EdTech landscape by equipping kids and young adults with future-ready skills like public speaking, and more. We're on a mission to spark curiosity, creativity, and confidence in learners worldwide. If you're passionate about meaningful impact, growth, and innovation—you're in the right place. Location: Gurgaon (On-site) Experience Level: Entry to Early Career (Freshers welcome!) Shift Options: Domestic | Middle East | International Working Days: 5 days/week (Wednesday & Thursday off) | Weekend availability required Target Joiners: Any (Bachelor’s or Master’s) 🔥 What You'll Be Owning (Your Impact): Lead Activation: Engage daily with high-intent leads through dynamic channels—calls, video consults, and more. Sales Funnel Pro: Own the full sales journey—from first hello to successful enrollment. Consultative Selling: Host personalized video consultations with parents/adult learners, pitch trial sessions, and resolve concerns with clarity and empathy. Target Slayer: Consistently crush weekly revenue goals and contribute directly to Planet Spark’s growth engine. Client Success: Ensure a smooth onboarding experience and transition for every new learner. Upskill Mindset: Participate in hands-on training, mentorship, and feedback loops to constantly refine your game. 💡 Why Join Sales at Planet Spark? Only Warm Leads: Skip the cold calls—our leads already know us and have completed a demo session. High-Performance Culture: Be part of a fast-paced, energetic team that celebrates success and rewards hustle. Career Fast-Track: Unlock rapid promotions, performance bonuses, and leadership paths. Top-Notch Training: Experience immersive onboarding, live role-plays, and access to ongoing L&D programs. Rewards & Recognition: Weekly shoutouts, cash bonuses, and exclusive events to celebrate your wins. Make Real Impact: Help shape the minds of tomorrow while building a powerhouse career today. 🎯 What You Bring to the Table: Communication Powerhouse: You can build trust and articulate ideas clearly in both spoken and written formats. Sales-Driven: You know how to influence decisions, navigate objections, and close deals with confidence. Empathy First: You genuinely care about clients’ goals and tailor your approach to meet them. Goal-Oriented: You’re self-driven, proactive, and hungry for results. Tech Fluent: Comfortable using CRMs, video platforms, and productivity tools. ✨ What’s in It for You? 💼 High-growth sales career with serious earning potential 🌱 Continuous upskilling in EdTech, sales, and communication 🧘 Supportive culture that values growth and well-being 🎯 Opportunity to work at the cutting edge of education innovation
Posted 2 days ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently processes, stores and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements a framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, and retention of data for internal and external users. Designs and provides guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure, optimizing database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud-based platforms for storing data (e.g. Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, and Kanban. Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies.
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process and leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering is highly preferred and includes: 5-8 years of experience Familiarity with analyzing complex business systems, industry requirements, and/or data regulations Background in processing and managing large data sets Design and development for a Big Data platform using open source and third-party tools SPARK, Scala/Java, Map-Reduce, Hive, HBase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Experience developing applications requiring large file movement for a Cloud-based environment and other data extraction tools and methods from a variety of sources Experience in building analytical solutions Intermediate Experience In The Following Is Preferred Experience with IoT technology Experience in Agile software development Qualifications Work closely with business Product Owner to understand product vision. Play a key role across DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core (Azure DataLake, Snowflake). Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. Independently design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and data lakes.
Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP). Take part in the evaluation of new data tools and POCs and provide suggestions. Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization. Proactively address and resolve issues that compromise data accuracy and usability. Preferred Skills Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus. API: Working knowledge of APIs to consume data from ERP and CRM systems.
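To illustrate the ETL/ELT pipeline work this role describes, here is a minimal PySpark sketch of an ingest-validate-write flow; the storage paths, column names, and quality threshold are hypothetical placeholders rather than details from the posting.

```python
# Minimal ETL sketch: ingest raw ERP extracts, validate, and write a curated table.
# Paths, column names, and the quality threshold are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("erp_orders_curation").getOrCreate()

# Extract: read raw, semi-structured ERP order extracts (hypothetical location).
raw = spark.read.json("abfss://raw@datalake.dfs.core.windows.net/erp/orders/")

# Transform: normalize types, derive a load date for partitioning, deduplicate.
curated = (
    raw.withColumn("order_amount", F.col("order_amount").cast("double"))
       .withColumn("load_date", F.to_date("order_timestamp"))
       .dropDuplicates(["order_id"])
)

# Data-quality gate: fail the run if too many rows are missing the key field.
total = curated.count()
missing = curated.filter(F.col("order_id").isNull()).count()
if total == 0 or missing / total > 0.01:  # 1% threshold is an assumption
    raise ValueError(f"Quality check failed: {missing}/{total} rows lack order_id")

# Load: write a partitioned, query-friendly table for downstream consumers.
curated.write.mode("overwrite").partitionBy("load_date").parquet(
    "abfss://curated@datalake.dfs.core.windows.net/erp/orders/"
)
```

A production pipeline of the kind described would add schema enforcement, incremental loads, and the monitoring and alerting hooks the role calls for.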
Posted 2 days ago
4.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Supports, develops and maintains a data and analytics platform. Effectively and efficiently processes, stores and makes data available to analysts and other consumers. Works with the Business and IT teams to understand the requirements to best leverage the technologies to enable agile data delivery at scale. Key Responsibilities Implements and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Implements methods to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, and retention of data for internal and external users. Develops reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Develops physical data models and implements data storage architectures as per design guidelines. Analyzes complex data elements and systems, data flow, dependencies, and relationships in order to contribute to conceptual, physical and logical data models. Participates in testing and troubleshooting of data pipelines. Develops and operates large scale data storage and processing solutions using different distributed and cloud-based platforms for storing data (e.g. Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses agile development technologies, such as DevOps, Scrum, Kanban and continuous improvement cycles, for data-driven applications. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process and leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience 4-5 years of experience. Relevant experience preferred, such as working in a temporary student employment, intern, co-op, or other extracurricular team activities. Knowledge of the latest technologies in data engineering is highly preferred and includes: Exposure to Big Data open source SPARK, Scala/Java, Map-Reduce, Hive, HBase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Familiarity with developing applications requiring large file movement for a Cloud-based environment Exposure to Agile software development Exposure to building analytical solutions Exposure to IoT technology Qualifications Work closely with business Product Owner to understand product vision. Participate in DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core (Azure DataLake, Snowflake). Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. Work under limited supervision to design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and data lakes. Responsible for creation of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP) with guidance and help from senior data engineers. Take part in the evaluation of new data tools and POCs with guidance and help from senior data engineers. Take ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization under limited supervision. Assist in resolving issues that compromise data accuracy and usability. Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Intermediate level expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms.
ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. API: Working knowledge of APIs to consume data from ERP and CRM systems.
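As a rough illustration of the data-quality monitoring this role mentions, the sketch below runs a few rule-based checks over a curated table; the table location, rules, and alerting stub are assumptions, not details from the posting.

```python
# Sketch of a recurring data-quality monitor for a curated table.
# The table path, rule set, and alerting hook are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_monitor").getOrCreate()
df = spark.read.parquet("/mnt/curated/erp/orders/")  # hypothetical location

checks = {
    "null_order_id": df.filter(F.col("order_id").isNull()).count(),
    "negative_amount": df.filter(F.col("order_amount") < 0).count(),
    "duplicate_keys": df.count() - df.dropDuplicates(["order_id"]).count(),
}

failed = {name: count for name, count in checks.items() if count > 0}
if failed:
    # In a real pipeline this would raise an alert or open a ticket;
    # printing stands in for the alerting mechanism here.
    print(f"Data-quality alert: {failed}")
else:
    print("All data-quality checks passed")
```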
Posted 2 days ago
0.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
General information Country India State Telangana City Hyderabad Job ID 45479 Department Development Description & Requirements Senior Java Developer is responsible for architecting and developing advanced Java solutions. This role involves leading the design and implementation of microservice architectures with Spring Boot, optimizing services for performance and scalability, and ensuring code quality. The Senior Developer will also mentor junior developers and collaborate closely with cross-functional teams to deliver comprehensive technical solutions. Essential Duties: Lead the development of scalable, robust, and secure Java components and services. Architect and optimize microservice solutions using Spring Boot. Translate customer requirements into comprehensive technical solutions. Conduct code reviews and maintain high code quality standards. Optimize and scale microservices for performance and reliability. Collaborate effectively with cross-functional teams to innovate and develop solutions. Experience in leading projects and mentoring engineers in best practices and innovative solutions. Coordinate with customer and client-facing teams for effective solution delivery. Basic Qualifications: Bachelor’s degree in Computer Science or a related field. 7-9 years of experience in Java development. Expertise in designing and implementing Microservices with Spring Boot. Extensive experience in applying design patterns, system design principles, and expertise in event-driven and domain-driven design methodologies. Extensive experience with multithreading, asynchronous and defensive programming. Proficiency in MongoDB, SQL databases, and S3 data storage. Experience with Kafka, Kubernetes, AWS services & AWS SDK. Hands-on experience with Apache Spark. Strong knowledge of Linux, Git, and Docker. Familiarity with Agile methodologies and tools like Jira and Confluence. Excellent communication and leadership skills. Preferred Qualifications Experience with Spark using Spring Boot. Familiarity with the C4 Software Architecture Model. Experience using tools like Lucidchart for architecture and flow diagrams. About Infor Infor is a global leader in business cloud software products for companies in industry specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com Our Values At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and communities we serve in now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees. Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. 
Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
Posted 2 days ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Client :- Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. They provide a variety of services, including consulting, technology, professional, and outsourcing services. Job Details :- Position: Data Analyst - AI & Bedrock Experience Required: 6-10 years Notice: Immediate Work Location: Pune Mode Of Work: Hybrid Type of Hiring: Contract to Hire Job Description:- FAS - Data Analyst - AI & Bedrock Specialization About Us: We are seeking a highly experienced and visionary Data Analyst with a deep understanding of artificial intelligence (AI) principles and hands-on expertise with cutting-edge tools like Amazon Bedrock. This role is pivotal in transforming complex datasets into actionable insights, enabling data-driven innovation across our organization. Role Summary: The Lead Data Analyst, AI & Bedrock Specialization, will be responsible for spearheading advanced data analytics initiatives, leveraging AI and generative AI capabilities, particularly with Amazon Bedrock. With 5+ years of experience, you will lead the design, development, and implementation of sophisticated analytical models, provide strategic insights to stakeholders, and mentor a team of data professionals. This role requires a blend of strong technical skills, business acumen, and a passion for pushing the boundaries of data analysis with AI. Key Responsibilities: • Strategic Data Analysis & Insight Generation: o Lead end-to-end data analysis projects, from defining business problems to delivering actionable insights that influence strategic decisions. o Utilize advanced statistical methods, machine learning techniques, and AI-driven approaches to uncover complex patterns and trends in large, diverse datasets. o Develop and maintain comprehensive dashboards and reports, translating complex data into clear, compelling visualizations and narratives for executive and functional teams. • AI/ML & Generative AI Implementation (Bedrock Focus): o Implement data analytical solutions leveraging Amazon Bedrock, including selecting appropriate foundation models (e.g., Amazon Titan, Anthropic Claude) for specific use cases (text generation, summarization, complex data analysis). o Design and optimize prompts for Large Language Models (LLMs) to extract meaningful insights from unstructured and semi-structured data within Bedrock. o Explore and integrate other AI/ML services (e.g., Amazon SageMaker, Amazon Q) to enhance data processing, analysis, and automation workflows. o Contribute to the development of AI-powered agents and intelligent systems for automated data analysis and anomaly detection. • Data Governance & Quality Assurance: o Ensure the accuracy, integrity, and reliability of data used for analysis. o Develop and implement robust data cleaning, validation, and transformation processes. o Establish best practices for data management, security, and governance in collaboration with data engineering teams. • Technical Leadership & Mentorship: o Evaluate and recommend new data tools, technologies, and methodologies to enhance analytical capabilities.
o Collaborate with cross-functional teams, including product, engineering, and business units, to understand requirements and deliver data-driven solutions. • Research & Innovation: o Stay abreast of the latest advancements in AI, machine learning, and data analytics trends, particularly concerning generative AI and cloud-based AI services. o Proactively identify opportunities to apply emerging technologies to solve complex business challenges. Required Skills & Qualifications: • Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related quantitative field. • 5+ years of progressive experience as a Data Analyst, Business Intelligence Analyst, or similar role, with a strong portfolio of successful data-driven projects. • Proven hands-on experience with AI/ML concepts and tools, with a specific focus on Generative AI and Large Language Models (LLMs). • Demonstrable experience with Amazon Bedrock is essential, including knowledge of its foundation models, prompt engineering, and ability to build AI-powered applications. • Expert-level proficiency in SQL for data extraction and manipulation from various databases (relational, NoSQL). • Advanced proficiency in Python (Pandas, NumPy, Scikit-learn, etc.) or R for data analysis, statistical modeling, and scripting. • Strong experience with data visualization tools such as Tableau, Power BI, Qlik Sense, or similar, with a focus on creating insightful and interactive dashboards. • Experience with cloud platforms (AWS preferred) and related data services (e.g., S3, Redshift, Glue, Athena). • Excellent analytical, problem-solving, and critical thinking skills. • Strong communication and presentation skills, with the ability to convey complex technical findings to non-technical stakeholders. • Ability to work independently and collaboratively in a fast-paced, evolving environment. Preferred Qualifications: • Experience with other generative AI frameworks or platforms (e.g., OpenAI, Google Cloud AI). • Familiarity with data warehousing concepts and ETL/ELT processes. • Knowledge of big data technologies (e.g., Spark, Hadoop). • Experience with MLOps practices for deploying and managing AI/ML models. Learn about building AI agents with Bedrock and Knowledge Bases to understand how these tools revolutionize data analysis and customer service.
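For the Bedrock-focused work described above, invoking a foundation model from Python typically goes through the bedrock-runtime client. The sketch below summarizes an unstructured note with a Claude model; the model ID, region, request schema, and sample text are assumptions that should be checked against current Bedrock documentation and your account's model access.

```python
# Sketch: summarize an unstructured document with a Bedrock-hosted Claude model.
# Model ID, region, and token limits are assumptions to verify for your account.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

document = "Customer escalation logged 12 May; invoice 4417 disputed, refund pending approval..."
prompt = f"Summarize the key facts and open actions in this note:\n\n{document}"

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 300,
    "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```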
Posted 2 days ago
0 years
0 Lacs
Agra, Uttar Pradesh, India
On-site
About Us: Planet Spark is reshaping the EdTech landscape by equipping kids and young adults with future-ready skills like public speaking, and more. We're on a mission to spark curiosity, creativity, and confidence in learners worldwide. If you're passionate about meaningful impact, growth, and innovation—you're in the right place. Location: Gurgaon (On-site) Experience Level: Entry to Early Career (Freshers welcome!) Shift Options: Domestic | Middle East | International Working Days: 5 days/week (Wednesday & Thursday off) | Weekend availability required Target Joiners: Any (Bachelor’s or Master’s) 🔥 What You'll Be Owning (Your Impact): Lead Activation: Engage daily with high-intent leads through dynamic channels—calls, video consults, and more. Sales Funnel Pro: Own the full sales journey—from first hello to successful enrollment. Consultative Selling: Host personalized video consultations with parents/adult learners, pitch trial sessions, and resolve concerns with clarity and empathy. Target Slayer: Consistently crush weekly revenue goals and contribute directly to Planet Spark’s growth engine. Client Success: Ensure a smooth onboarding experience and transition for every new learner. Upskill Mindset: Participate in hands-on training, mentorship, and feedback loops to constantly refine your game. 💡 Why Join Sales at Planet Spark? Only Warm Leads: Skip the cold calls—our leads already know us and have completed a demo session. High-Performance Culture: Be part of a fast-paced, energetic team that celebrates success and rewards hustle. Career Fast-Track: Unlock rapid promotions, performance bonuses, and leadership paths. Top-Notch Training: Experience immersive onboarding, live role-plays, and access to ongoing L&D programs. Rewards & Recognition: Weekly shoutouts, cash bonuses, and exclusive events to celebrate your wins. Make Real Impact: Help shape the minds of tomorrow while building a powerhouse career today. 🎯 What You Bring to the Table: Communication Powerhouse: You can build trust and articulate ideas clearly in both spoken and written formats. Sales-Driven: You know how to influence decisions, navigate objections, and close deals with confidence. Empathy First: You genuinely care about clients’ goals and tailor your approach to meet them. Goal-Oriented: You’re self-driven, proactive, and hungry for results. Tech Fluent: Comfortable using CRMs, video platforms, and productivity tools. ✨ What’s in It for You? 💼 High-growth sales career with serious earning potential 🌱 Continuous upskilling in EdTech, sales, and communication 🧘 Supportive culture that values growth and well-being 🎯 Opportunity to work at the cutting edge of education innovation
Posted 2 days ago
0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
On-site
About Us: Planet Spark is reshaping the EdTech landscape by equipping kids and young adults with future-ready skills like public speaking, and more. We're on a mission to spark curiosity, creativity, and confidence in learners worldwide. If you're passionate about meaningful impact, growth, and innovation—you're in the right place. Location: Gurgaon (On-site) Experience Level: Entry to Early Career (Freshers welcome!) Shift Options: Domestic | Middle East | International Working Days: 5 days/week (Wednesday & Thursday off) | Weekend availability required Target Joiners: Any (Bachelor’s or Master’s) 🔥 What You'll Be Owning (Your Impact): Lead Activation: Engage daily with high-intent leads through dynamic channels—calls, video consults, and more. Sales Funnel Pro: Own the full sales journey—from first hello to successful enrollment. Consultative Selling: Host personalized video consultations with parents/adult learners, pitch trial sessions, and resolve concerns with clarity and empathy. Target Slayer: Consistently crush weekly revenue goals and contribute directly to Planet Spark’s growth engine. Client Success: Ensure a smooth onboarding experience and transition for every new learner. Upskill Mindset: Participate in hands-on training, mentorship, and feedback loops to constantly refine your game. 💡 Why Join Sales at Planet Spark? Only Warm Leads: Skip the cold calls—our leads already know us and have completed a demo session. High-Performance Culture: Be part of a fast-paced, energetic team that celebrates success and rewards hustle. Career Fast-Track: Unlock rapid promotions, performance bonuses, and leadership paths. Top-Notch Training: Experience immersive onboarding, live role-plays, and access to ongoing L&D programs. Rewards & Recognition: Weekly shoutouts, cash bonuses, and exclusive events to celebrate your wins. Make Real Impact: Help shape the minds of tomorrow while building a powerhouse career today. 🎯 What You Bring to the Table: Communication Powerhouse: You can build trust and articulate ideas clearly in both spoken and written formats. Sales-Driven: You know how to influence decisions, navigate objections, and close deals with confidence. Empathy First: You genuinely care about clients’ goals and tailor your approach to meet them. Goal-Oriented: You’re self-driven, proactive, and hungry for results. Tech Fluent: Comfortable using CRMs, video platforms, and productivity tools. ✨ What’s in It for You? 💼 High-growth sales career with serious earning potential 🌱 Continuous upskilling in EdTech, sales, and communication 🧘 Supportive culture that values growth and well-being 🎯 Opportunity to work at the cutting edge of education innovation
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
We are looking for a talented Lead Data Scientist to join our Finance Analytics team. Your main responsibility will be to analyze financial data in order to derive insights, detect patterns, and propose automation opportunities within our finance and accounting departments. By leveraging your expertise in data engineering and advanced analytics, you will be transforming raw financial data into valuable insights. Collaborating with finance teams is crucial as you work towards understanding business requirements. Your key responsibilities will include: - Applying advanced analytics techniques to extract insights from financial datasets - Building and optimizing data pipelines using Python, Spark, and SQL for data preparation - Developing and implementing machine learning models to identify patterns and automation opportunities - Creating interactive dashboards and visualizations using BI tools to effectively communicate insights - Collaborating with finance teams to translate their data needs into analytical solutions - Identifying and tracking relevant metrics for meaningful business intelligence - Supporting data-driven decision making by presenting clear findings - Conducting exploratory data analysis to uncover trends in financial data - Mentoring junior data scientists on analytical techniques - Implementing statistical analysis methods for validating findings and ensuring data quality - Documenting methodologies, processes, and results for reproducibility and knowledge sharing We are looking for someone with: - 5-7 years of experience in data science or analytics, preferably with financial or business data exposure - Strong technical background in data engineering and pipeline development - Advanced proficiency in Python and experience with Spark for large-scale data processing - Experience working with data from Snowflake Data Lake or similar cloud-based data platforms - Demonstrated skill in building dashboards and visualizations using BI tools - Proficiency in SQL for data extraction and manipulation - Experience applying machine learning algorithms to solve business problems - Ability to communicate technical concepts to non-technical stakeholders - Understanding of basic financial concepts and metrics - Strong problem-solving skills and attention to detail - Bachelor's degree in Computer Science, Data Science, Statistics, or related technical field Desired additional qualifications include: - Experience working in cross-functional teams in fast-paced environments - Familiarity with agile methodologies and collaborative development practices - Experience with version control systems and collaborative coding - Knowledge of cloud computing platforms - Understanding of data governance and data quality best practices - Continuous learning mindset and staying updated with emerging data science technologies If you are interested in exploring this opportunity, please submit your CV and a cover letter explaining why you believe you are a perfect fit for this role. We will be reviewing applications during the application period and filling the vacancy once a suitable candidate is identified.
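As one small example of the pattern-detection work this role describes, the sketch below flags unusual monthly spend with a simple z-score rule; the file, column names, and 3-sigma threshold are hypothetical, and a production approach would likely use richer statistical or ML models.

```python
# Sketch: flag anomalous monthly expense totals in a finance dataset.
# File name, columns, and the 3-sigma threshold are illustrative assumptions.
import pandas as pd

ledger = pd.read_csv("gl_transactions.csv", parse_dates=["posting_date"])

# Aggregate spend by month and cost center.
monthly = (
    ledger.groupby([ledger["posting_date"].dt.to_period("M"), "cost_center"])["amount"]
          .sum()
          .reset_index(name="total_spend")
)

# Z-score of each month's spend within its cost center.
grp = monthly.groupby("cost_center")["total_spend"]
monthly["zscore"] = (monthly["total_spend"] - grp.transform("mean")) / grp.transform("std")

# Months more than three standard deviations from the norm are flagged for review.
anomalies = monthly[monthly["zscore"].abs() > 3]
print(anomalies.sort_values("zscore", ascending=False))
```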
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Senior Data Scientist I at Dotdash Meredith, you will collaborate with the business team to understand problems, objectives, and desired outcomes. Your primary responsibility will be to work with cross-functional teams to assess data science use cases & solutions, lead and execute end-to-end data science projects, and collaborate with stakeholders to ensure alignment of data solutions with business goals. You will be expected to build custom data models with an initial focus on content classification, utilize advanced machine learning techniques to improve model accuracy and performance, and build necessary visualizations to interpret data models by business teams. Additionally, you will work closely with the engineering team to integrate models into production systems, monitor model performance in production, and make improvements as necessary. To excel in this role, you must possess a Master's degree (or equivalent experience) in Data Science, Mathematics, Statistics, or a related field with 3+ years of experience in ML/Data Science/Predictive-Analytics. Strong programming skills in Python and experience with standard data science tools and libraries are essential. Experience or understanding of deploying machine learning models in production on at least one cloud platform is required, and hands-on experience with LLM APIs and the ability to craft effective prompts are preferred. It would be beneficial to have experience in the Media domain, familiarity with vector databases like Milvus, and E-commerce or taxonomy classification experience. In this role, you will have the opportunity to learn about building ML models using industry-standard frameworks, solving Data Science problems for the media industry, and the use of Gen AI in Media. This position is based in Eco World, Bengaluru, with shift timings from 1 p.m. to 10 p.m. IST. If you are a bright, engaged, creative, and fun individual with a passion for data science, we invite you to join our inspiring team at Dotdash Meredith India Services Pvt. Ltd.
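A content-classification model of the kind described above can be prototyped as a lightweight baseline before moving to heavier approaches; the sketch below uses TF-IDF features with logistic regression, and the tiny inline dataset and label set are made-up examples.

```python
# Baseline content classifier: TF-IDF features + logistic regression.
# The inline articles and labels are placeholders for real editorial data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Ten easy weeknight pasta recipes for busy families",
    "How to refinance your mortgage when rates drop",
    "Stretching routines that improve lower-back pain",
    "Index funds versus actively managed funds explained",
]
labels = ["food", "finance", "health", "finance"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["Simple sheet-pan dinners for weeknights"]))  # expected: ['food']
```

In practice, such a baseline would be compared against transformer-based classifiers once enough labeled content is available.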
Posted 2 days ago
12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Senior Test Automation Engineer Treasury and FP&A Technology is looking for a seasoned Senior Test Automation Engineer to define, plan, and execute the testing automation strategy for the Global Funds Transfer Pricing Application. This role requires a hands-on leader who can architect robust automation frameworks, enhance testing efficiency, and ensure seamless software delivery of the highest quality. The ideal candidate will bring expertise in automation tools, agile methodologies, and quality engineering best practices to transform and enhance the current testing automation landscape. This candidate should have an engineering-first mindset, with the ability to leverage the latest technologies to achieve efficient and effective solutions. This candidate must be a developer first, and then a quality engineer. Responsibilities: Define, plan, and execute the testing automation strategy for CitiFTP. Continuously monitor automation coverage and enhance the existing automation framework to increase the automation coverage. Design, develop, and implement scalable and maintainable automation frameworks for UI, API, and data validation testing on a Big Data/Hadoop platform. Collaborate with other testing areas, development teams, product owners, and business partners to integrate automation into the agile SDLC. Enhance the efficiency of regression and end-to-end testing using automation. Develop robust test scripts and maintain automation suites to support rapid software releases. Improve overall test coverage, defect detection, and release quality through automation. Establish and track key QA metrics, e.g., defect leakage, test execution efficiency, and automation coverage. Advocate for best practices in test automation, including code reviews, reusability and maintainability. Drive the adoption of AI/ML-based testing tools and emerging trends in test automation. Manage, mentor, and upskill a team of test engineers in automation practices. Foster a culture of continuous learning and innovation within the testing community. Define career development paths and ensure team members stay up to date with industry advancements. Analyze trends at an organizational level to improve processes; follows and analyzes industry trends. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency, as well as effectively supervise the activity of others and create accountability with those who fail to maintain these standards. Qualifications: 12+ years of experience in functional and non-functional software testing 5+ years of experience as Test Automation Lead Expertise in test automation frameworks / tools like Jenkins, Selenium, Cucumber, TestNG, JUnit, Cypress. Strong programming skills in Java, Python or any other programming or scripting language. Expertise in SQL. Experience with API testing tools (Postman, RestAssured) and performance testing tools (JMeter, LoadRunner) Expertise in build tools like Maven / Gradle, continuous integration tools like Jenkins, source management tools like Git/GitHub. Strong knowledge of Agile, Scrum, and DevOps practices. Strong knowledge of functional test tools (JIRA). Familiarity with cloud-based test execution – AWS, Azure, or GCP.
Familiarity with big data testing (Spark, HDFS) and database testing automation (Oracle, SQL). Preferred - Experience with AI-driven test automation and advanced test data management strategies. The candidate should have the curiosity to research and utilize various AI tools available in the space of test automation. Preferred – Certifications such as ISTQB Advanced, Certified Agile Tester, or Selenium WebDriver certification. Exposure to banking / financial domains, particularly Treasury applications, is a plus. Requires communication and diplomacy skills and an ability to persuade and influence. Hands-on in code review, unit testing and integration testing. Very confident, innovative, self-motivated, aggressive and results-oriented. The ideal candidate should be passionate about automation in quality engineering. Education: Bachelor’s/University degree, Master’s degree preferred ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Technology Quality ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
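As a sketch of the API-level automation this role covers, the pytest example below exercises a hypothetical REST endpoint; the base URL, paths, and expected fields are placeholders, not the actual CitiFTP contract.

```python
# Sketch: automated API regression checks with pytest + requests.
# The base URL, endpoint, and expected fields are hypothetical placeholders.
import pytest
import requests

BASE_URL = "https://ftp-app.example.internal/api/v1"


@pytest.fixture(scope="session")
def session():
    # Shared HTTP session so connection setup and headers are reused across tests.
    s = requests.Session()
    s.headers.update({"Accept": "application/json"})
    return s


def test_rates_endpoint_returns_ok(session):
    resp = session.get(f"{BASE_URL}/transfer-pricing/rates", timeout=10)
    assert resp.status_code == 200


def test_rate_record_has_required_fields(session):
    resp = session.get(f"{BASE_URL}/transfer-pricing/rates", timeout=10)
    record = resp.json()[0]
    # Field names below are assumptions standing in for the real contract.
    for field in ("currency", "tenor", "rate", "effective_date"):
        assert field in record
```

Suites like this are typically wired into a Jenkins stage so every build reports regression results automatically.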
Posted 2 days ago
7.0 years
0 Lacs
Itanagar, Arunachal Pradesh, India
On-site
Job Overview We are seeking a highly skilled and experienced Lead Data Engineer AWS to spearhead the design, development, and optimization of our cloud-based data infrastructure. As a technical leader, you will drive scalable data solutions using AWS services and modern data engineering tools, ensuring robust data pipelines and architectures for real-time and batch data processing. The ideal candidate is a hands-on technologist with a deep understanding of distributed data systems, cloud-native data services, and team leadership in Agile environments. Responsibilities: Design, build, and maintain scalable, fault-tolerant, and secure data pipelines using AWS-native services (e.g., Glue, EMR, Lambda, S3, Redshift, Athena, Kinesis). Lead end-to-end implementation of data architecture strategies including ingestion, storage, transformation, and data governance. Collaborate with data scientists, analysts, and application developers to understand data requirements and deliver optimal solutions. Ensure best practices for data quality, data cataloging, lineage tracking, and metadata management using tools like AWS Glue Data Catalog or Apache Atlas. Optimize data pipelines for performance, scalability, and cost-efficiency across structured and unstructured data sources. Mentor and lead a team of data engineers, providing technical guidance, code reviews, and architecture recommendations. Implement data modeling techniques (OLTP/OLAP), partitioning strategies, and data warehousing best practices. Maintain CI/CD pipelines for data infrastructure using tools such as AWS CodePipeline and Git. Monitor production systems and lead incident response and root cause analysis for data infrastructure issues. Drive innovation by evaluating emerging technologies and proposing improvements to the existing data platform. Skills & Qualifications: Minimum 7 years of experience in data engineering with at least 3+ years in a lead or senior engineering role. Strong hands-on experience with AWS data services: S3, Redshift, Glue, Lambda, EMR, Athena, Kinesis, RDS, DynamoDB. Advanced proficiency in Python/Scala/Java for ETL development and data transformation logic. Deep understanding of distributed data processing frameworks (e.g., Apache Spark, Hadoop). Solid grasp of SQL and experience with performance tuning in large-scale environments. Experience implementing data lakes, lakehouse architecture, and data warehousing solutions in the cloud. Knowledge of streaming data pipelines using Kafka, Kinesis, or AWS MSK. Proficiency with infrastructure-as-code (IaC) using Terraform or AWS CloudFormation. Experience with DevOps practices and tools such as Docker, Git, Jenkins, and monitoring tools (CloudWatch, Prometheus, Grafana). Expertise in data governance, security, and compliance in cloud environments (ref:hirist.tech)
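As an illustration of the streaming ingestion work listed above, the sketch below publishes JSON events to a Kinesis stream with boto3; the stream name, region, and event shape are assumptions rather than details from the posting.

```python
# Sketch: publish change events to a Kinesis stream for downstream ETL.
# The stream name, region, and event shape are illustrative assumptions.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")


def publish_event(event: dict, stream_name: str = "order-change-events") -> str:
    """Send one JSON event; the partition key groups records by order ID."""
    response = kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event["order_id"]),
    )
    return response["SequenceNumber"]


if __name__ == "__main__":
    seq = publish_event({"order_id": 1042, "status": "SHIPPED", "amount": 1850.0})
    print(f"Published event at sequence {seq}")
```

On the consuming side, a Glue or EMR job would typically read the stream, apply transformations, and land curated data in S3 or Redshift.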
Posted 2 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Please Read Carefully Before Applying
Do NOT apply unless you have 3+ years of real-world, hands-on experience in the requirements listed below. Do NOT apply if you are not in Delhi or the NCR, OR are unwilling to relocate. This is NOT a WFH opportunity: we work 5 days from the office, so please do NOT apply if you are looking for a hybrid or WFH arrangement.
About Gigaforce
Gigaforce is a California-based InsurTech company delivering a next-generation, SaaS-based claims platform purpose-built for the Property and Casualty industry. Our blockchain-optimized solution integrates artificial intelligence (AI)-powered predictive models with deep domain expertise to streamline and accelerate subrogation and claims processing. Whether for insurers, recovery vendors, or other ecosystem participants, Gigaforce transforms the traditionally fragmented claims lifecycle into an intelligent, end-to-end digital experience.
Recognized as one of the most promising emerging players in the insurance technology space, Gigaforce has already achieved significant milestones. We were a finalist for InsurtechNY, a leading platform accelerating innovation in the insurance industry, and were twice named a Top 50 company by the TiE Silicon Valley community. Additionally, Plug and Play Tech Center, the world's largest early-stage investor and innovation accelerator, selected Gigaforce to join its prestigious global accelerator headquartered in Sunnyvale, California.
At the core of our platform is a commitment to cutting-edge innovation. We harness the power of technologies such as AI, Machine Learning, Robotic Process Automation, Blockchain, Big Data, and Cloud Computing, leveraging modern languages and frameworks like Java, Kotlin, Angular, and Node.js.
We are driven by a culture of curiosity, excellence, and inclusion. At Gigaforce, we hire top talent and provide an environment where every voice matters and every idea is valued. Our employees enjoy comprehensive medical benefits, equity participation, meal cards, and generous paid time off. As an equal opportunity employer, we are proud to foster a diverse, equitable, and inclusive workplace that empowers all team members to thrive.
We're seeking NLP & Generative AI Engineers with 2-8 years of hands-on experience in traditional machine learning, natural language processing, and modern generative AI techniques. If you have experience deploying GenAI solutions to production, working with open-source technologies, and handling document-centric pipelines, this is the role for you. You'll work in a high-impact role, leading the design, development, and deployment of innovative AI/ML solutions for insurance claims processing and beyond. In this agile environment, you'll work within structured sprints and leverage data-driven insights and user feedback to guide decision-making. You'll balance strategic vision with tactical execution to ensure we continue to lead the industry in subrogation automation and claims optimization for the property and casualty insurance market.
Key Responsibilities
Build and deploy end-to-end NLP and GenAI-driven products focused on document understanding, summarization, classification, and retrieval.
Design and implement models leveraging LLMs (e.g., GPT, T5, BERT) with capabilities like fine-tuning, instruction tuning, and prompt engineering.
Work on scalable, cloud-based pipelines for training, serving, and monitoring models.
Handle unstructured data from insurance-related documents such as claims, legal texts, and contracts.
Collaborate cross-functionally with data scientists, ML engineers, product managers, and developers.
Utilize and contribute to open-source tools and frameworks in the ML ecosystem.
Deploy production-ready solutions using MLOps practices: Docker, Kubernetes, Airflow, MLflow, etc.
Work on distributed/cloud systems (AWS, GCP, or Azure) with GPU-accelerated workflows.
Evaluate and experiment with open-source LLMs and embedding models (e.g., LangChain, Haystack, LlamaIndex, Hugging Face).
Champion best practices in model validation, reproducibility, and responsible AI.
Required Skills & Qualifications
2-8 years of experience as a Data Scientist, NLP Engineer, or ML Engineer.
Strong grasp of traditional ML algorithms (SVMs, gradient boosting, etc.) and NLP fundamentals (word embeddings, topic modeling, text classification).
Proven expertise in modern NLP & GenAI models, including: Transformer architectures (e.g., BERT, GPT, T5); generative tasks such as summarization, QA, and chatbots; fine-tuning and prompt engineering for LLMs.
Experience with cloud platforms (especially AWS SageMaker, GCP, or Azure ML).
Strong coding skills in Python, with libraries like Hugging Face, PyTorch, TensorFlow, Scikit-learn.
Experience with open-source frameworks (LangChain, LlamaIndex, Haystack) preferred.
Experience in document processing pipelines and understanding structured/unstructured insurance documents is a big plus.
Familiarity with MLOps tools such as MLflow, DVC, FastAPI, Docker, KubeFlow, Airflow.
Familiarity with distributed computing and large-scale data processing (Spark, Hadoop, Databricks).
Preferred Qualifications
Experience deploying GenAI models in production environments.
Contributions to open-source projects in the ML/NLP/LLM space.
Background in insurance, legal, or financial domains involving text-heavy workflows.
Strong understanding of data privacy, ethical AI, and responsible model usage. (ref:hirist.tech)
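As a rough, non-authoritative illustration of the document-understanding tasks listed above, the sketch below runs off-the-shelf Hugging Face pipelines (summarization and zero-shot classification) over a toy claim note. The model choices, candidate labels, and sample text are assumptions made for the example, not part of the posting.

```python
# Illustrative sketch: summarize and classify a claim note with
# off-the-shelf Hugging Face pipelines. Model choices and labels
# are assumptions for the example, not a production recommendation.
from transformers import pipeline

claim_note = (
    "Insured reports a rear-end collision on 12 March. The other driver "
    "admitted fault at the scene. A repair estimate was received from the "
    "body shop; medical treatment was not required."
)

# Abstractive summarization of the unstructured note.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
summary = summarizer(claim_note, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])

# Zero-shot classification into hypothetical subrogation-relevant labels.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
labels = ["subrogation opportunity", "no recovery potential", "needs more documents"]
result = classifier(claim_note, candidate_labels=labels)
print(list(zip(result["labels"], [round(s, 3) for s in result["scores"]])))
```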
Posted 2 days ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers further. This is a world of more possibilities, more innovation, more openness, and sky-is-the-limit thinking in a cloud-enabled world. Microsoft’s Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture. Within Azure Data, the data integration team builds data gravity on the Microsoft Cloud. Massive volumes of data are generated – not just from transactional systems of record, but also from the world around us. Our data integration products, Azure Data Factory and Power Query, make it easy for customers to bring in, clean, shape, and join data, to extract intelligence. We’re the team that developed the Mashup Engine (M) and Power Query. We already ship monthly to millions of users across Excel, Power/Pro BI, Flow, and PowerApps; but in many ways we’re just getting started. We’re building new services, experiences, and engine capabilities that will broaden the reach of our technologies to several new areas – data “intelligence”, large-scale data analytics, and automated data integration workflows. We plan to use example-based interaction, machine learning, and innovative visualization to make data access and transformation even more intuitive for non-technical users. We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.
Responsibilities
Engine layer: designing and implementing components for dataflow orchestration, distributed querying, query translation, connecting to external data sources, and script parsing/interpretation.
Service layer: designing and implementing infrastructure for a containerized, microservices-based, high-throughput architecture.
UI layer: designing and implementing performant, engaging web user interfaces for data visualization/exploration/transformation/connectivity and dataflow management.
Embody our culture and values.
Qualifications
Required/Minimum Qualifications
Bachelor's Degree in Computer Science or a related technical discipline AND 6+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience.
Experience in data integration or migrations or ELT or ETL tooling is mandatory.
Preferred/Additional Qualifications
BS degree in Computer Science.
Engine role: familiarity with data access technologies (e.g. ODBC, JDBC, OLEDB, ADO.Net, OData), query languages (e.g. T-SQL, Spark SQL, Hive, MDX, DAX), query generation/optimization, OLAP.
UI role: familiarity with JavaScript, TypeScript, CSS, React, Redux, webpack.
Service role: familiarity with microservice architectures, Docker, Service Fabric, Azure blobs/tables/databases, high-throughput services.
Full-stack role: a mix of the qualifications for the UX/service/backend roles.
Other Requirements
Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Equal Opportunity Employer (EOP)
#azdat #azuredata #microsoftfabric #dataintegration
Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 2 days ago
9.0 - 13.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The Applications Development Senior Programmer Analyst position is an intermediate level role where you will be responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities.
Your responsibilities will include conducting tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establishing and implementing new or revised applications systems and programs to meet specific business needs or user areas. You will also be required to monitor and control all phases of the development process including analysis, design, construction, testing, and implementation. Additionally, you will provide user and operational support on applications to business users.
Utilizing your in-depth specialty knowledge of applications development, you will analyze complex problems/issues, evaluate business and system processes, adhere to industry standards, and make evaluative judgments. You will also recommend and develop security measures post-implementation to ensure successful system design and functionality, consult with users/clients and other technology groups, recommend advanced programming solutions, and assist in the installation and exposure of customer systems. Ensuring essential procedures are followed, defining operating standards and processes, and serving as an advisor or coach to new or lower-level analysts will also be part of your role.
As an Applications Development Senior Programmer Analyst, you must appropriately assess risks during business decisions, with a particular focus on the firm's reputation and safeguarding Citigroup, its clients, and assets. This includes driving compliance with applicable laws, rules, and regulations, applying sound ethical judgment, and escalating, managing, and reporting control issues with transparency.
Qualifications for this role include:
- 9-12 years of relevant experience
- Must-have skills in Java, Spark, and Big Data
- Good-to-have skills in Kafka and Tableau
- Experience in systems analysis and programming of software applications
- Experience in managing and implementing successful projects
- Working knowledge of consulting/project management techniques and methods
- Ability to work under pressure, manage deadlines, and adapt to unexpected changes in expectations or requirements
Education requirement:
- Bachelor's degree/University degree or equivalent experience
Please note that this job description provides a high-level overview of the work performed. Other job-related duties may be assigned as required.
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
You will be part of a team responsible for developing a next-generation Data Analytics Engine that converts raw market and historical data into actionable insights for the electronics supply chain industry. This platform processes high-volume data from suppliers, parts, and trends to provide real-time insights and ML-driven applications.
We are seeking an experienced Lead or Staff Data Engineer to assist in shaping and expanding our core data infrastructure. The ideal candidate should have a strong background in designing and implementing scalable ETL pipelines and real-time data systems in AWS and open-source environments such as Airflow, Spark, and Kafka. This role involves taking technical ownership, providing leadership, improving our architecture, enforcing best practices, and mentoring junior engineers.
Your responsibilities will include designing, implementing, and optimizing scalable ETL pipelines using AWS-native tools, migrating existing pipelines to open-source orchestration tools, leading data lake and data warehouse architecture design, managing CI/CD workflows, implementing data validation and quality checks, contributing to Infrastructure as Code, and offering technical mentorship and guidance on architectural decisions.
To qualify for this role, you should have at least 8 years of experience as a Data Engineer or in a similar role with production ownership, expertise in AWS tools, deep knowledge of the open-source data stack, strong Python programming skills, expert-level SQL proficiency, experience with CI/CD tools, familiarity with Infrastructure as Code, and the ability to mentor engineers and drive architectural decisions. Preferred qualifications include a background in ML/AI pipelines, experience with serverless technologies and containerized deployments, and familiarity with data observability tools and alerting systems. A Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field is preferred.
In return, you will have the opportunity to work on impactful supply chain intelligence problems, receive mentorship from experienced engineers and AI product leads, work in a flexible and startup-friendly environment, and enjoy competitive compensation with opportunities for career growth.
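For illustration, here is a minimal Airflow 2.x sketch (TaskFlow API, assuming Airflow 2.4 or later) of the extract, validate, and load orchestration this posting refers to. The DAG id, schedule, and task logic are placeholders invented for the example.

```python
# Illustrative Airflow DAG sketch (TaskFlow API, Airflow 2.4+ assumed).
# Task logic, names, and the schedule are placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    dag_id="orders_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    tags=["example"],
)
def orders_daily_pipeline():
    @task
    def extract():
        # In a real pipeline this would pull from an API, S3, or a database.
        return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": 75.5}]

    @task
    def validate(rows):
        # Simple data-quality gate: drop rows with non-positive amounts.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows):
        # Placeholder for a warehouse load (e.g., a COPY into Redshift).
        print(f"loading {len(rows)} rows")

    load(validate(extract()))


orders_daily_pipeline()
```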
Posted 2 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Microsoft’s Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture.
Within Azure Data, the data integration team builds data gravity on the Microsoft Cloud. Massive volumes of data are generated – not just from transactional systems of record, but also from the world around us. Our data integration products, Azure Data Factory and Power Query, make it easy for customers to bring in, clean, shape, and join data, to extract intelligence.
We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.
Responsibilities
Build cloud-scale products with a focus on efficiency, reliability, and security.
Build and maintain end-to-end build, test, and deployment pipelines.
Deploy and manage massive Hadoop, Spark, and other clusters.
Contribute to the architecture & design of the products.
Triage issues and implement solutions to restore service with minimal disruption to the customer and business. Perform root cause analysis, trend analysis, and post-mortems.
Own components and drive them end to end, from gathering requirements through development, testing, and deployment, to ensuring high quality and availability post-deployment.
Embody our culture and values.
Qualifications
Required/Minimum Qualifications
Bachelor's Degree in Computer Science or a related technical discipline AND 4+ years technical engineering experience with coding in languages like C#, React, Redux, TypeScript, JavaScript, Java, or Python, OR equivalent experience.
Experience in data integration or data migrations or ELT or ETL tooling is mandatory.
Other Requirements
Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Equal Opportunity Employer (EOP)
#azdat #azuredata #microsoftfabric #dataintegration
Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Chandigarh
On-site
You should possess 7-10 years of industry experience, of which at least 5 years should have been in machine learning roles. Your proficiency in Python and popular ML libraries such as TensorFlow, PyTorch, and Scikit-learn should be advanced. Furthermore, you should have hands-on experience in distributed training, model optimization (including quantization and pruning), and inference at scale. Experience with cloud ML platforms like AWS (SageMaker), GCP (Vertex AI), or Azure ML is essential. You are expected to be familiar with MLOps tooling such as MLflow, TFX, Airflow, or Kubeflow, and with data engineering frameworks like Spark, dbt, or Apache Beam. A solid understanding of CI/CD for ML, model governance, and post-deployment monitoring (e.g., data drift, model decay) is crucial for this role. In addition to technical skills, problem-solving abilities, effective communication, and strong documentation skills are highly valued in this position.
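As a small illustration of the MLOps tooling mentioned above, here is a hedged sketch of experiment tracking with MLflow around a scikit-learn model. The experiment name, run name, and hyperparameters are assumptions chosen for the example.

```python
# Illustrative MLflow tracking sketch: log parameters, a metric, and a
# fitted scikit-learn model for one training run. Names are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-baseline")

with mlflow.start_run(run_name="rf-depth-8"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)
    mlflow.log_metric("test_accuracy", acc)
    mlflow.sklearn.log_model(model, artifact_path="model")
```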
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
Delhi
On-site
The ideal candidate should possess extensive expertise in SQL, data modeling, ETL/ELT pipeline development, and cloud-based data platforms like Databricks or Snowflake. You will be responsible for designing scalable data models, managing reliable data workflows, and ensuring the integrity and performance of critical financial datasets. Collaboration with engineering, analytics, product, and compliance teams is a key aspect of this role.
Responsibilities:
- Design, implement, and maintain logical and physical data models for transactional, analytical, and reporting systems.
- Develop and oversee scalable ETL/ELT pipelines to process large volumes of financial transaction data.
- Optimize SQL queries, stored procedures, and data transformations for enhanced performance.
- Create and manage data orchestration workflows using tools like Airflow, Dagster, or Luigi.
- Architect data lakes and warehouses utilizing platforms such as Databricks, Snowflake, BigQuery, or Redshift.
- Ensure adherence to data governance, security, and compliance standards (e.g., PCI-DSS, GDPR).
- Work closely with data engineers, analysts, and business stakeholders to comprehend data requirements and deliver solutions.
- Conduct data profiling, validation, and quality assurance to maintain clean and consistent data.
- Maintain comprehensive documentation for data models, pipelines, and architecture.
Required Skills & Qualifications:
- Proficiency in advanced SQL, including query tuning, indexing, and performance optimization.
- Experience in developing ETL/ELT workflows with tools like Spark, dbt, Talend, or Informatica.
- Familiarity with data orchestration frameworks such as Airflow, Dagster, Luigi, etc.
- Hands-on experience with cloud-based data platforms like Databricks, Snowflake, or similar technologies.
- Deep understanding of data warehousing principles like star/snowflake schema, slowly changing dimensions, etc.
- Knowledge of cloud services (AWS, GCP, or Azure) and data security best practices.
- Strong analytical and problem-solving skills in high-scale environments.
Preferred Qualifications:
- Exposure to real-time data pipelines like Kafka, Spark Streaming.
- Knowledge of data mesh or data fabric architecture paradigms.
- Certifications in Snowflake, Databricks, or relevant cloud platforms.
- Familiarity with Python or Scala for data engineering tasks.
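To make the star-schema modeling mentioned above concrete, here is a small, self-contained sketch using DuckDB: one fact table joined to one dimension with an aggregate rollup. All table and column names are assumptions invented for the example.

```python
# Illustrative star-schema sketch in DuckDB: one fact table, one
# dimension, and a rollup query. All names are placeholders.
import duckdb

con = duckdb.connect()  # in-memory database

con.execute("""
    CREATE TABLE dim_merchant (
        merchant_key INTEGER,
        merchant_name VARCHAR,
        category VARCHAR
    )
""")
con.execute("""
    CREATE TABLE fact_payment (
        payment_id INTEGER,
        merchant_key INTEGER,
        amount DECIMAL(10, 2),
        paid_at TIMESTAMP
    )
""")
con.execute("""
    INSERT INTO dim_merchant VALUES
        (1, 'Acme Retail', 'retail'),
        (2, 'Bolt Cabs', 'transport')
""")
con.execute("""
    INSERT INTO fact_payment VALUES
        (101, 1, 250.00, '2024-01-05 10:00:00'),
        (102, 1,  99.90, '2024-01-06 12:30:00'),
        (103, 2,  40.00, '2024-01-06 18:45:00')
""")

# Rollup of payment volume by merchant category across the star schema.
rollup = con.execute("""
    SELECT d.category,
           COUNT(*)      AS payments,
           SUM(f.amount) AS total_amount
    FROM fact_payment f
    JOIN dim_merchant d USING (merchant_key)
    GROUP BY d.category
    ORDER BY total_amount DESC
""").fetchall()

print(rollup)
```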
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a highly skilled and motivated Technical Lead who will be joining our growing team. Your role will involve leading and delivering complex technical projects in areas such as AI/ML, Full Stack Development, and Cloud-based solutions.
Your responsibilities will include overseeing the end-to-end execution of multiple software development projects, focusing on AI/ML initiatives and products to ensure timely delivery and high quality. You will be responsible for architecture design, technical planning, and code quality across the team, specifically for scalable AI/ML solutions, robust data pipelines, and integration of models into production systems.
Collaboration with stakeholders, both internal and external, will be a key part of your role. You will gather requirements for AI/ML features, provide progress updates, and effectively manage expectations. Mentoring and guiding developers to foster a culture of continuous improvement and technical excellence, especially in AI/ML best practices, model development, and ethical AI considerations, will be essential.
You will work closely with cross-functional teams, including QA, DevOps, and UI/UX designers, to seamlessly integrate AI/ML models and applications into broader systems. Implementing best practices in development, deployment, and version control with a strong emphasis on MLOps and reproducible AI/ML workflows is crucial. Tracking project milestones, managing technical risks, and ensuring that AI/ML projects align with overarching business goals will be part of your responsibilities. Participating in client calls to provide technical insights and solution presentations, demonstrating the value and capabilities of our AI/ML offerings, will be required. Driving research, experimentation, and adoption of cutting-edge AI/ML algorithms and techniques to enhance product capabilities is also expected from you.
Required Skills:
- Strong hands-on experience in at least one Fullstack framework (e.g., MERN stack, Python with React).
- Proven experience managing and delivering end-to-end AI/ML projects or products.
- Proficiency in major AI/ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn.
- Solid experience with data processing, feature engineering, and data pipeline construction for machine learning workloads.
- Proficiency in project tracking tools like Jira, Trello, or Asana.
- Solid understanding of SDLC, Agile methodologies, and CI/CD practices.
- Strong knowledge of cloud platforms like AWS, Azure, or GCP, especially their AI/ML services.
- Excellent problem-solving, communication, and leadership skills.
Preferred Qualifications:
- Bachelor's or Master's degree in a related field.
- Experience with containerization technologies and microservices architecture.
- Exposure to MLOps practices and tools.
- Prior experience in a client-facing technical leadership role.
- Familiarity with big data technologies.
- Contributions to open-source AI/ML projects or relevant publications are a plus.
(ref:hirist.tech)
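For illustration, here is a minimal scikit-learn sketch of the kind of reproducible training pipeline referenced above: preprocessing and a classifier bundled into a single estimator, evaluated with cross-validation. The dataset choice and hyperparameters are placeholders chosen for the example.

```python
# Illustrative scikit-learn pipeline: preprocessing and a classifier
# combined into one estimator, evaluated with 5-fold cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"mean ROC AUC: {scores.mean():.3f} (+/- {scores.std():.3f})")
```

Bundling the scaler and model in one pipeline keeps preprocessing inside each cross-validation fold, which is one common way to avoid leakage between training and validation data.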
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
Join the leader in entertainment innovation and help design the future at Dolby. At Dolby, science meets art, and high tech means more than computer code. As a member of the Dolby team, you'll see and hear the results of your work everywhere, from movie theaters to smartphones. Dolby continues to revolutionize how people create, deliver, and enjoy entertainment worldwide. To achieve this, Dolby seeks the absolute best talent. Dolby offers a collegial culture, challenging projects, excellent compensation and benefits, and a Flex Work approach that is truly flexible to support where, when, and how you do your best work.
At Dolby, the aim is to change the way the world experiences sight and sound. Dolby enables people to experience music, movies, videos, and pictures in all their intended grandeur, making life and work more meaningful and immersive. Dolby provides technology to content creators, owners, distributors, manufacturers of TV, mobile, and PC, as well as social and media platforms, so they can truly delight their customers.
Advanced Technology Group (ATG) is the research and technology arm of Dolby Labs, focusing on innovating technologies in audio, video, AR/VR, gaming, music, and movies. Various areas of expertise related to computer science and electrical engineering are highly relevant to the research conducted by ATG. As a talented Applied Researcher at Dolby, you will have the opportunity to advance the state of the art in technologies of interest to Dolby and society at large. Research areas at Dolby Laboratories include large-scale cloud and edge data platforms and services, accelerating insight discovery from data, and topics such as distributed systems, stream processing, edge computing, applied machine learning and AI, big graphs, natural language processing, big data management, and heterogeneous data analytics.
Key Responsibilities:
- Develop platforms and tools to enable interactive and immersive data-driven experiences utilizing AI-based techniques.
- Deploy AI/ML training and inference algorithms in distributed computing environments.
- Partner with ATG researchers on opportunities in adjacent research domains such as applied AI and machine learning in audio/video domains.
Requirements for Success:
- Technical depth: ability to implement scalable AI/ML libraries and platforms for real-time processing, and knowledge of audio/video streaming formats.
- Openness to explore new technologies and innovate in new areas.
- Ability to invent and innovate technologies that enhance the sight and sound associated with digital content consumption.
- Sense of urgency to respond to changing trends and technologies.
- Collaborative mindset to work with peers and external partners to develop industry-leading technologies.
Background:
- PhD in Computer Science or a related field with proven R&D experience, or exceptional Master's candidates with 4+ years of experience.
- Expertise in deep learning frameworks and ML libraries such as TensorFlow, PyTorch, scikit-learn, Spark MLlib.
- Experience in large-scale distributed systems like Hadoop, Spark, etc.
- Proficiency in Python, C++, or related languages.
- Strong analytical, problem-solving, communication, and presentation skills.
- Experience with AWS, GCP, DevOps, CI/CD, and UNIX/Linux commands.
- Strong publication record in leading IEEE, ACM conferences and journals.
Join Dolby and be part of a team that is shaping the future of entertainment technology with innovative research and cutting-edge solutions.
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
You will be responsible for developing and maintaining scalable data processing systems using Apache Spark and Azure Databricks. This includes implementing data integration from various sources such as RDBMS, ERP systems, and files. You will design and optimize SQL queries, stored procedures, and relational schemas. Additionally, you will build stream-processing systems using technologies like Apache Storm or Spark Streaming, and utilize messaging systems like Kafka or RabbitMQ for data ingestion. Performance tuning of Spark jobs for optimal efficiency will be a key focus area.
Collaboration with cross-functional teams to deliver high-quality data solutions is essential in this role. You will also lead and mentor a team of data engineers, fostering a culture of continuous improvement and Agile practices.
Key skills required for this position include proficiency in Apache Spark and Azure Databricks, strong experience with the Azure ecosystem and Python, and working knowledge of PySpark (nice to have). Experience in data integration from varied sources, expertise in SQL optimization and stream-processing systems, familiarity with Kafka or RabbitMQ, and the ability to lead and mentor engineering teams are also crucial. A strong understanding of distributed computing principles is a must.
To qualify for this role, you should hold a Bachelor's degree in Computer Science, Information Technology, or a related field.
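As an illustrative sketch of the stream-processing setup described above, the snippet below uses PySpark Structured Streaming to consume JSON events from Kafka and write them to Parquet with checkpointing. The broker address, topic, schema, and paths are assumptions, and the job assumes the spark-sql-kafka connector package is available on the cluster.

```python
# Illustrative Structured Streaming sketch: Kafka -> parsed JSON -> Parquet.
# Broker, topic, schema, and paths are placeholders; requires the
# spark-sql-kafka connector on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; cast the value to a string and parse the JSON payload.
events = (
    raw.selectExpr("CAST(value AS STRING) AS json")
       .select(F.from_json("json", event_schema).alias("e"))
       .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/events/parquet")
    .option("checkpointLocation", "/data/events/_checkpoints")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```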
Posted 2 days ago
2.0 - 8.0 years
0 Lacs
Haryana
On-site
You will be part of Maruti Suzuki's Analytics Centre of Excellence (ACE) team as a Data Scientist. Your responsibilities will include designing and implementing workflows of Linear and Logistic Regression and Ensemble Models (Random Forest, Boosting) using R/Python. You should have demonstrable competency in Probability and Statistics, with the ability to use ideas of data distributions, hypothesis testing, and other statistical tests. Experience in handling outliers, denoising data, and managing the impact of pandemic-like situations will be crucial.
Additionally, you will be expected to perform Exploratory Data Analysis (EDA) of raw data, conduct feature engineering where applicable, and showcase competency in data visualization using the Python/R data science stack. Leveraging cloud platforms for training and deploying large-scale solutions, as well as training and evaluating ML models using various machine learning and deep learning algorithms, will be part of your role. You will also need to retrain and maintain model accuracy in deployment, and package and deploy large-scale models on on-premise systems using multiple approaches, including Docker. Taking complete ownership of the assigned project, working in Agile environments, and being well-versed with project tracking tools like JIRA or equivalent will be expected.
Your competencies should include knowledge of cloud platforms (AWS, Azure, and GCP), exposure to NoSQL databases (MongoDB, Cassandra, Cosmos DB, HBase), and forecasting experience in products like SAP, Oracle, Power BI, Qlik, etc. Proficiency in Excel (Power Pivot, Power Query, Macros, Charts), experience with large datasets and distributed computing (Hive/Hadoop/Spark), and transfer learning using state-of-the-art models in different spaces such as vision, NLP, and speech will be beneficial. Integration with external services and cloud APIs, as well as working with data annotation approaches and tools for text, images, and videos, will also be part of your responsibilities.
The ideal candidate should have a minimum of 2 years and a maximum of 8 years of work experience, along with a Bachelor of Technology (B.Tech) or equivalent educational qualification.
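For illustration, here is a minimal sketch of an outlier-aware regression workflow of the kind described above: simple IQR-based clipping of the target followed by a random-forest model. The synthetic data, thresholds, and hyperparameters are assumptions made for the example.

```python
# Illustrative sketch: IQR-based outlier clipping followed by a
# random-forest regression workflow on synthetic data. Thresholds and
# hyperparameters are example assumptions only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1_500, n_features=8, noise=15.0, random_state=0)

# Clip extreme target values to the 1.5 * IQR whiskers, a simple way to
# limit the influence of outliers before model fitting.
q1, q3 = np.percentile(y, [25, 75])
iqr = q3 - q1
y_clipped = np.clip(y, q1 - 1.5 * iqr, q3 + 1.5 * iqr)

X_train, X_test, y_train, y_test = train_test_split(
    X, y_clipped, test_size=0.2, random_state=0
)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"test MAE: {mae:.2f}")
```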
Posted 2 days ago
2.0 - 31.0 years
1 - 10 Lacs
Work From Home
Remote
Job Title: Work-from-Home Telecaller (Cold Calling Specialist) – Drive Gomini's Growth Revolution
About Gomini
At Gomini, we're more than a business – we're a movement. We preserve India's indigenous cow breeds while creating sustainable livelihoods for rural communities through innovative panchgavya products (from milk to natural wellness items). Our model turns cows into assets that generate real income, helping families thrive without leaving their villages. We've empowered over 100 entrepreneurs and are scaling fast. If you believe in building a better Bihar and beyond, join us to cold call your way to impact and earnings from home.
Job Overview
We're hiring driven Telecallers for pure cold calling to generate leads and grow Gomini across India. This is a full-time, work-from-home role with flexible hours (40-50 hours/week, including evenings/weekends for best reach). You'll make outbound calls to cold contacts (business lists, directories, etc.), introduce Gomini's story, and turn interest into qualified leads or closures. High volume, high reward – we provide scripts and tools, but success comes from your hustle. Earn a minimum fixed salary with very high incentives based on conversions. Perfect for resilient communicators who love the thrill of building from zero.
Key Responsibilities
Your focus is on high-volume cold calling to spark interest and close deals. Here's exactly what you'll do:
Make 100+ outbound cold calls per day to targeted lists (e.g., potential investors, urban professionals, or businesses interested in agri opportunities) using our proven scripts.
Introduce Gomini confidently: share how people can invest in dairy units or adopt cows for passive income, and make it simple, exciting, and tailored to their needs.
Handle objections smoothly, like "I'm not interested" or "Tell me more," by asking questions and highlighting benefits (e.g., 20% returns, rural impact).
Qualify prospects by gauging interest, budget, and fit, then book follow-ups, demos, or direct closures over the phone.
Close small opportunities (e.g., single cow adoptions) on the spot and log all interactions in our CRM for tracking incentives.
Follow up persistently with promising leads via calls, WhatsApp, or email to nurture them to commitment.
Hit daily/weekly targets: aim for 10-20 qualified leads or 3-5 closures per week to unlock top incentives.
Share insights from calls (e.g., what hooks work best) to refine our approach.
It's all about persistence and genuine conversations – we'll train you to turn "no" into "tell me more."
Requirements
To excel in cold calling, you should have:
Outstanding spoken Hindi, English, and one or more regional languages; leads data will be shared accordingly.
A confident, persuasive voice with the resilience to handle rejection – you enjoy the challenge of winning people over.
Basic setup: reliable internet (at least 10 Mbps), a quiet workspace, a headset, and a smartphone for calls/CRM.
Self-motivation for remote work: the ability to stay disciplined, log calls, and push through slow days.
Passion for sales or rural causes – bonus if you're from a rural background, have had cows at home, and understand local issues like job scarcity.
At least 2 years in cold calling, sales, or telemarketing (freshers okay with proven communication skills and grit).
Availability for 6 days a week, with flexible shifts (e.g., 10 AM-6 PM or evenings for peak times).
No degree needed – we want hunters who deliver results.
Compensation and Benefits
Minimum Fixed Salary: ₹10,000 per month (your safety net, paid on time regardless of results).
Very High Incentives: unlimited potential (₹300-₹1,000 per qualified lead, ₹11,000 per closure). Top performers hit ₹1 lakh+ total. Weekly incentive payouts to fuel your momentum.
Work-from-home freedom: no office, flexible breaks, and the ability to build your day around call targets.
Full training: a 1-week paid session on scripts, objection handling, and Gomini's story, plus ongoing support.
Career growth: standout callers can advance to lead roles or specialized teams as we expand.
Real purpose: every call creates opportunities in rural India and preserves our indigenous cows – fight joblessness while building your bank.
How to Apply
Send us a 1-2 minute voice note about your background and what you think about https://gomini.in as a mission, on our WhatsApp: wa.aisensy.com/+918170905222
Posted 2 days ago