HYRGPT

27 Job openings at HYRGPT
Regional Sales Manager Mumbai, Maharashtra, India 0 years Not disclosed On-site Full Time

Who we’re looking for: A sales leader with a solid network among doctors in Mumbai, passionate about eldercare, and with a proven record of driving revenue and partnerships in the healthcare/homecare sector.

Key Responsibilities:
- Own P&L and revenue growth for the region.
- Lead and mentor a team of BDMs and Sr. Sales Managers.
- Drive doctor/hospital partnerships for lead generation and conversion.
- Develop and execute territory-specific sales strategies.
- Analyze sales data and optimize performance using MIS.

You’ll excel if you have:
- 10+ years in field sales, 7+ in leadership with P&L ownership.
- Deep experience in healthcare sales or partnerships.
- A strong Mumbai-based network of doctors and hospitals.
- MBA (preferred).

Location: Mumbai

Sales Development Representative New Delhi, Delhi, India 1 - 3 years Not disclosed On-site Full Time

Join Our Team as a Sales Development Representative (SDR) - UK and Europe Market

Are you a skilled sales professional with a proven track record in the SaaS industry?

Job Description: As an SDR focused on the UK market, you will be instrumental in driving revenue growth through proactive outreach: generating and qualifying leads, nurturing relationships, and building a robust sales pipeline. The current role is for the AI product Zoi AI.

Key Responsibilities:
- Conduct outbound sales prospecting activities to identify and qualify potential customers in overseas markets (UK preferred).
- Engage with prospects via email, social media (LinkedIn), phone, and other outbound campaigns to understand their needs and present our SaaS solutions effectively.
- Write compelling, personalised emails and LinkedIn messages to engage and nurture leads.
- Conduct product demos for prospects and clearly articulate value propositions.
- Collaborate closely with the sales, marketing, and IT teams to understand the solution proposition, execute targeted campaigns, and optimize lead engagement strategies.
- Build and maintain a consistent pipeline of qualified opportunities.
- Utilize CRM tools to manage lead interactions, track progress, and report on sales activities.
- Achieve and exceed quarterly and annual sales targets.

Qualifications:
- 1-3 years of experience in sales development or inside sales within the UK market, preferably in the SaaS industry.
- Full-time graduation in any stream is mandatory.
- Proven ability to generate leads and convert them into qualified opportunities.
- Strong communication and interpersonal skills for product demonstrations, with the ability to articulate value propositions clearly.
- Self-motivated, goal-oriented, and capable of working independently.
- Proficiency in CRM software (e.g., HubSpot) and sales prospecting tools.

Artificial Intelligence Engineer - LLM Models Delhi, Delhi, India 0 years Not disclosed On-site Full Time

Artificial Intelligence Engineer | Experience: 2-5 years | Location: Delhi

Skills: We're looking for sharp AI developers with hands-on experience in AI development, ideally with a strong foundation in Python (7-8/10 skill level) and practical exposure to AI/ML concepts and OpenAI agent development. Candidates should possess strong logical thinking, prompt engineering skills, and clear communication abilities. This role offers the opportunity to work on real-world AI agent applications in a fast-growing SaaS platform focused on sales and productivity tools for consulting and financial services.

Education: Full-time B.Tech/BE in Computer Science or a related field.

What You'll Do:
- Build intelligent AI agents using OpenAI and LLMs.
- Design APIs and backend logic to integrate AI into SaaS workflows.
- Create AI-driven features in Python (Django/Flask).
- Work on CRM and third-party tool integrations.
- Continuously improve agent capabilities using prompt engineering and user feedback.

You Should Have:
- Strong Python coding skills (7-8/10 level).
- Hands-on experience with OpenAI APIs and LLMs.
- Understanding of AI/ML models and architectures.
- Prompt-writing capability with logical clarity.
- Familiarity with REST APIs, GitHub, and agile workflows.
- Bonus: Exposure to React, Mongo/PostgreSQL, GraphQL, or CRM systems.
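As a hedged illustration only (not part of the original posting): the OpenAI-based agent work described above often starts with a thin Python wrapper around the Chat Completions API. The model id, prompt, and helper name below are hypothetical placeholders, assuming the openai>=1.0 Python SDK.

```python
# Minimal sketch of an LLM-backed helper, assuming the openai>=1.0 Python SDK.
# The model id, system prompt, and function name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_crm_note(note: str) -> str:
    """Ask the model to condense a free-text CRM note into one sentence."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model id
        messages=[
            {"role": "system", "content": "You summarize sales notes in one sentence."},
            {"role": "user", "content": note},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_crm_note("Met the client; they want a demo of the reporting module next week."))
```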

InterSystems IRIS Developer - FHIR/Caché ObjectScript Greater Kolkata Area 3 - 5 years Not disclosed On-site Full Time

Job Description: We are seeking talented and motivated InterSystems IRIS Developers to join our growing team at our USI locations. The ideal candidates will have strong hands-on experience with the InterSystems IRIS Data Platform and be proficient in designing, developing, and optimizing healthcare or enterprise-grade solutions.

Key Responsibilities:
- Design and develop applications using InterSystems IRIS, ObjectScript, and SQL
- Build and integrate RESTful and SOAP-based web services
- Work with FHIR standards for healthcare data integration and interoperability
- Conduct data modeling, schema design, and performance tuning
- Deploy and manage IRIS environments on Linux/Windows platforms
- Collaborate with cross-functional teams to deliver high-quality solutions
- Provide technical guidance and troubleshooting support

Required Skills:
- 3-5 years of hands-on experience with the InterSystems IRIS Data Platform
- Strong knowledge of ObjectScript and SQL
- Proficiency in building and integrating REST/SOAP APIs
- FHIR implementation experience is a must
- Experience in data modeling and performance tuning
- Familiarity with deploying solutions on Linux/Windows

Nice-to-Have Skills:
- Working knowledge of Python, Java, or .NET
- Experience with Docker containerization
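For orientation only (not taken from the posting): FHIR interoperability of the kind mentioned above is exposed as a REST API, so resources can be fetched with any HTTP client. A minimal Python sketch follows; the server URL and patient id are placeholders, and Python is listed in the posting only as a nice-to-have.

```python
# Minimal sketch of reading a FHIR resource over REST, using the requests library.
# The base URL and resource id are placeholders, not details from the job posting.
import requests

FHIR_BASE = "https://example.org/fhir"  # placeholder FHIR endpoint

def get_patient(patient_id: str) -> dict:
    """Fetch a Patient resource as JSON from a FHIR R4 server."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    patient = get_patient("example")
    print(patient.get("resourceType"), patient.get("id"))
```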

Data Engineer - Snowflake DB Chennai, Tamil Nadu, India 3 - 5 years Not disclosed On-site Full Time

Job: Snowflake Data Engineer | Experience: 3-5 years | Location: Hyderabad, Bangalore, Mumbai, Gurgaon, Pune, Chennai, Kolkata | Employment Type: Full-Time

Job Summary: We are seeking a skilled Snowflake Data Engineer with 3-5 years of hands-on experience in designing, developing, and optimizing cloud-based data warehousing solutions. This position offers a compelling opportunity to work on a flagship data initiative for one of our premier Big 4 consulting clients, providing ample scope for technical innovation, learning, and career growth.

Key Responsibilities:
- Design and develop high-performance data pipelines: build scalable and efficient Snowflake pipelines for data ingestion, transformation, and storage, with a special focus on external tables, semi-structured data handling, and transformation logic.
- Optimize Snowflake workloads: ensure optimal query execution and cost-effective utilization of compute and storage resources; tune performance across large-scale datasets and implement workload management strategies.
- ETL/ELT development: develop robust ETL processes using SQL, Python, and orchestration tools such as DBT, Apache Airflow, Matillion, or Talend, with a focus on automation, data transformation, and pipeline reliability.
- Integrate with AWS Glue: utilize AWS Glue capabilities such as crawlers, jobs, and external tables for seamless integration with Snowflake, ensuring consistent and automated data ingestion and cataloging.
- Data governance and security: enforce data governance, role-based access control, and compliance protocols within Snowflake; ensure secure handling of sensitive data and privacy adherence.
- Handle diverse data formats: ingest and transform structured and semi-structured formats such as JSON, Parquet, Avro, and XML, enabling flexibility in data consumption across reporting and analytics.
- Data modeling: design dimensional models, including fact and dimension tables, optimized for Snowflake architecture to enable efficient querying and integration with BI tools.
- Cross-functional collaboration: partner with business stakeholders, data analysts, and BI developers to align technical solutions with business needs and translate requirements into scalable data solutions.
- Pipeline monitoring and issue resolution: monitor end-to-end data workflows, ensure system reliability, and proactively troubleshoot failures and performance bottlenecks.

Key Skills & Qualifications:
- Hands-on experience with Snowflake development and architecture.
- Proficiency in SQL, Python, and cloud-native ETL/ELT tools.
- Experience with AWS Glue, S3, and Snowflake integration.
- Strong knowledge of data modeling, performance tuning, and cost optimization.
- Familiarity with handling semi-structured data.
- Good understanding of data governance, access control, and security best practices.
- Excellent problem-solving and communication skills.

Nice to Have:
- Experience working with Big 4 consulting clients or large enterprise environments.
- Exposure to DevOps practices, CI/CD pipelines, and data quality frameworks.
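As an illustrative sketch only (the posting includes no code): the semi-structured data handling described above typically comes down to Snowflake SQL over VARIANT columns, often issued from Python. All connection parameters and object names below are hypothetical.

```python
# Sketch of querying semi-structured JSON in Snowflake from Python, using
# the snowflake-connector-python package. All identifiers are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="my_user",            # placeholder
    password="my_password",    # placeholder
    warehouse="ANALYTICS_WH",
    database="RAW",
    schema="EVENTS",
)

# Extract fields from a VARIANT column holding JSON payloads.
sql = """
SELECT
    payload:customer_id::STRING AS customer_id,
    payload:order_total::NUMBER(10,2) AS order_total
FROM raw_orders
LIMIT 10
"""

with conn.cursor() as cur:
    for customer_id, order_total in cur.execute(sql):
        print(customer_id, order_total)

conn.close()
```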

AWS Data Engineer - ETL/Python Greater Kolkata Area 0 years Not disclosed On-site Full Time

AWS Data Engineer

- Design and optimize scalable, cloud-native data pipelines on AWS.
- Build and manage ETL workflows using SQL, Python, and modern orchestration tools.
- Work with both structured and semi-structured data formats, including JSON, Parquet, and Avro.
- Collaborate with BI and analytics teams to deliver data models that power dashboards and reporting tools.
- Ensure pipeline performance, cost-efficiency, and data security within a governed AWS environment.
- Support query performance tuning, data transformations, and automation.
- Participate in real-time data streaming and event-driven architectures using AWS-native services.

You'll Use:
- Strong knowledge of AWS Glue, Athena, Lambda, and Step Functions
- Knowledge of Redshift, S3, EC2, EMR, and Kinesis
- Hands-on experience in SQL (query tuning and scripting)
- Hands-on experience in Python, DBT, and Airflow

Add-ons:
- CI/CD for data pipelines using CodePipeline / CodeBuild
- Experience with real-time/streaming architectures
- Strong problem-solving and cloud architecture skills
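As a hedged sketch only (not from the listing): pipeline orchestration with AWS Glue is often driven from Python with boto3, for example triggering a job and polling its state. The job name and region below are placeholders.

```python
# Sketch of kicking off an AWS Glue job and polling its status with boto3.
# The job name and region are placeholders, not details from the posting.
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")  # placeholder region

def run_glue_job(job_name: str) -> str:
    """Start a Glue job run and wait until it finishes, returning the final state."""
    run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return state
        time.sleep(30)

if __name__ == "__main__":
    print(run_glue_job("orders_ingest_job"))  # placeholder job name
```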

Oracle Data Integrator Lead - ETL Pipeline Greater Kolkata Area 6 years Not disclosed On-site Full Time

Key Responsibilities - Oracle Data Integrator (ODI) ETL Lead:
- Lead the design, development, and deployment of ETL workflows using Oracle Data Integrator (ODI).
- Build and optimize complex ODI mappings, load plans, and custom Knowledge Modules (KMs).
- Contribute to multiple full-cycle data warehouse implementations, ensuring timely and quality delivery.
- Implement advanced ETL solutions including delta extracts, parameterization, data auditing, and error handling.
- Manage ODI repositories, agents, and topologies; monitor and optimize ETL performance.
- Perform data migrations leveraging SQL*Loader and ODI export/import utilities.
- Ensure ETL pipelines are automated, scalable, and reliable to support enterprise data needs.
- Collaborate with cross-functional teams to integrate ODI with diverse source and target systems.

Qualifications & Skills:
- 6+ years of ETL development experience, with 3-4 years focused on Oracle Data Integrator (ODI).
- Strong expertise in Oracle PL/SQL, ETL design principles, and ODI component management.
- Hands-on experience with ODI Master and Work repositories, custom Knowledge Module development, and performance tuning.
- Proven ability to integrate ODI across various data sources and target systems.
- Excellent problem-solving skills and ability to troubleshoot complex ETL workflows.
- Strong communication and collaboration skills.

Preferred Skills:
- Exposure to cloud-based data platforms (e.g., AWS, Azure, GCP).
- Familiarity with DevOps practices applied to ETL development.
- Experience with real-time data processing and streaming architectures.

Snowflake Engineer - ETL/AWS Glue Greater Kolkata Area 5 years Not disclosed On-site Full Time

Snowflake Engineer | Experience: 5-10 years | Location: Anywhere in India

Responsibilities:
- Design and develop scalable, high-performance data pipelines in Snowflake.
- Lead ETL/ELT development using tools such as SQL, Python, DBT, Airflow, Matillion, or Talend.
- Migrate complex T-SQL logic and stored procedures from SQL Server to Snowflake-compatible SQL or ELT workflows.
- Integrate AWS Glue to automate and orchestrate data workflows.
- Work with structured and semi-structured data formats (e.g., JSON, Parquet, Avro, XML).
- Optimize Snowflake performance and cost through effective query tuning and warehouse resource management.
- Design data models that support business intelligence and analytics use cases.
- Ensure high standards of data quality, validation, and consistency during migration and transformation processes.
- Enforce data governance, security, and access control policies to ensure compliance with organizational standards.
- Collaborate with data architects, business stakeholders, and analytics teams to understand requirements and deliver data solutions.
- Maintain up-to-date technical documentation, including data flows, mapping specifications, and operational procedures.

Skills & Qualifications:
- 5+ years of experience in data engineering or a similar role.
- Hands-on experience with Snowflake and cloud-based data platforms (AWS preferred).
- Strong expertise in SQL and at least one scripting language (preferably Python).
- Experience with ETL/ELT tools like DBT, Airflow, Matillion, or Talend.
- Familiarity with AWS Glue and other cloud-native data services.
- Proven ability to work with semi-structured data.
- Solid understanding of data modeling, data warehousing concepts, and BI tools.
- Strong focus on performance tuning, data validation, and data quality.
- Excellent communication and documentation skills.
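Purely for illustration (not from the posting): orchestration with Airflow, as mentioned above, is defined in Python DAG files. The DAG id, schedule, and task body below are hypothetical placeholders using only core Airflow operators.

```python
# Minimal Airflow DAG sketch for orchestrating an ELT step, using only core
# operators. The DAG id, schedule, and callable are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_to_snowflake(**context):
    """Placeholder task body; a real pipeline would call Snowflake or DBT here."""
    print("loading batch for", context["ds"])

with DAG(
    dag_id="daily_snowflake_elt",        # placeholder
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(
        task_id="load_to_snowflake",
        python_callable=load_to_snowflake,
    )
```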

Technical Lead - Frontend Architecture Chennai, Tamil Nadu, India 6 - 8 years Not disclosed On-site Full Time

Job Description:
- Collaborate with the internal team during design, implementation, and testing of enterprise software products.
- Work with different stakeholders to understand customer requirements and develop solutions that address them.
- Be involved in UI design and development, unit tests, and integration tests.
- Should have used versioning tools and have a clear understanding of the product life cycle.
- Understanding of Agile methodology is a plus.

Qualification:
- 6-8 years of experience in software development/support of software products.
- Computer Science or equivalent engineering (BE/MTech/MCA) / B.Sc graduate from a reputed university/college with good academic records.
- Experience working on front-end framework applications using Angular 7 and Angular 10 & above.
- Strong understanding of HTML5, CSS3, and JavaScript/TypeScript.
- Proficient in responsive and mobile-first design principles.
- Familiarity with RESTful APIs and asynchronous request handling.
- Experience with version control systems (e.g., Git) and CI/CD pipelines.
- Knowledge of browser developer tools for debugging and performance optimization.
- Strong analytical skills and the ability to quickly adapt to a changing environment.
- Excellent debugging and troubleshooting skills on web interfaces and services.
- Ability to quickly learn new concepts and software.

Desired Skill Set:
- Primary Skill: Angular 7 (mandatory) and Angular 10 and above, HTML 5.0, CSS 3.0
- Secondary Skill: SQL Server, Cloud deployment, Micro frontend

Big Data Specialist Chennai, Tamil Nadu, India 7 - 9 years Not disclosed On-site Full Time

Job Title: Senior Cloud Systems Engineer (Big Data)
Technology: Google/AWS/Azure public cloud platforms, BigQuery / Airflow / Dataflow / Dataproc / PySpark, Terraform, Ansible, Jenkins, Linux, and Git.
Location: Chennai/Hyderabad, India
Experience: Overall 7 to 9 years

Job Summary: We are seeking a Senior Big Data Systems Engineer to lead a high-performing team of cloud engineers/data engineers for a large Big Data upstream environment hosted on-prem and in the cloud. The candidate should possess in-depth knowledge of Red Hat on Google Cloud.

Job Description - Skills, Roles and Responsibilities (Google/AWS/Azure public cloud, PySpark, BigQuery and Google Airflow):
- Participate in 24x7x365 SAP environment rotational shift support and operations.
- As a team lead, be responsible for maintaining the upstream Big Data environment, through which millions of financial records flow, consisting of PySpark, BigQuery, Dataproc, and Google Airflow.
- Streamline and tune existing Big Data systems and pipelines and build new ones; making sure the systems run efficiently and with minimal cost is a top priority.
- Manage the operations team in your respective shift; you will be making changes to the underlying systems.
- Provide day-to-day support, enhance platform functionality through DevOps practices, and collaborate with application development teams to optimize database operations.
- Architect and optimize data warehouse solutions using BigQuery to ensure efficient data storage and retrieval.
- Install/build/patch/upgrade/configure big data applications.
- Manage and configure BigQuery environments, datasets, and tables.
- Ensure data integrity, accessibility, and security on the BigQuery platform.
- Implement and manage partitioning and clustering for efficient data querying.
- Define and enforce access policies for BigQuery datasets.
- Implement query usage caps and alerts to avoid unexpected expenses.
- Be comfortable troubleshooting issues and failures on Linux-based systems, with a good grasp of the Linux command line.
- Create and maintain dashboards and reports to track key metrics such as cost and performance.
- Integrate BigQuery with other Google Cloud Platform (GCP) services such as Dataflow, Pub/Sub, and Cloud Storage.
- Enable BigQuery access through tools such as Jupyter Notebook, Visual Studio Code, and other CLIs.
- Implement data quality checks and data validation processes to ensure data integrity.
- Manage and monitor data pipelines using Airflow and CI/CD tools (e.g., Jenkins, Screwdriver) for automation.
- Collaborate with data analysts and data scientists to understand data requirements and translate them into technical solutions.
- Provide consultation and support to application development teams for database design, implementation, and monitoring.
- Proficiency in Unix/Linux OS fundamentals, shell/Perl/Python scripting, and Ansible for automation.
- Disaster recovery and high availability: expertise in planning and coordinating disaster recovery, including backup/restore operations; experience with geo-redundant databases and Red Hat clusters.
- Be accountable for keeping delivery within the defined SLA and agreed milestones (projects) by following best practices and processes for continuous service improvement.
- Work closely with other support organizations (DB, Google, PySpark data engineering, and Infrastructure teams).
- Follow Incident Management, Change Management, Release Management, and Problem Management processes based on the ITIL framework.
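As an illustrative sketch only (not part of the posting): BigQuery cost control of the kind described above is commonly checked from Python with a dry-run query that reports bytes scanned before anything is executed. The project, dataset, and table names are placeholders.

```python
# Sketch of running a BigQuery query from Python with a dry run first to
# estimate bytes scanned (a common cost-control step). Dataset and table
# names are placeholders, not details from the posting.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

sql = """
SELECT DATE(event_ts) AS day, COUNT(*) AS events
FROM `my_project.my_dataset.events`   -- placeholder table
WHERE event_ts >= TIMESTAMP('2024-01-01')
GROUP BY day
ORDER BY day
"""

# Dry run: validates the query and reports bytes that would be processed.
dry = client.query(sql, job_config=bigquery.QueryJobConfig(dry_run=True))
print(f"Estimated bytes processed: {dry.total_bytes_processed}")

# Real run.
for row in client.query(sql).result():
    print(row.day, row.events)
```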

Oracle Retail Consultant Karnataka 4 - 8 years INR Not disclosed On-site Full Time

You are a skilled Oracle Retail Technical Consultant with 3-8 years of experience, seeking to join a dynamic team and contribute to exciting integration projects. Your primary responsibility will be to work on various integration projects using your expertise in Oracle Retail Technical Integration and Data Flow Architecture. Your key skills and expertise will include a strong understanding of Oracle Retail Technical Integration & Data Flow Architecture, proficiency in integration patterns such as file-based, RFI, and API-based integration, experience in integrating MFCS with boundary applications through middleware or point-to-point connections, advanced PL/SQL skills, and APEX knowledge (preferable). Additionally, you should have a good grasp of Oracle Retail modules and functionalities. If you are enthusiastic about Oracle Retail technology and enjoy taking on challenging projects, we are eager to have you join our team. Apply now or reach out for more details.

Data Engineer - Snowflake DB Chennai, Tamil Nadu 3 - 7 years INR Not disclosed On-site Full Time

As a Snowflake Data Engineer with 3-5 years of experience, you will be responsible for designing, developing, and optimizing cloud-based data warehousing solutions. This is an exciting opportunity to work on a flagship data initiative for a premier Big 4 consulting client, offering ample scope for technical innovation, learning, and career growth.

Your key responsibilities will include:
- Designing and developing high-performance data pipelines in Snowflake for data ingestion, transformation, and storage. You will focus on external tables, semi-structured data handling, and transformation logic.
- Optimizing Snowflake workloads to ensure optimal query execution and cost-effective utilization of compute and storage resources. You will tune performance across large-scale datasets and implement workload management strategies.
- Developing robust ETL processes using SQL, Python, and orchestration tools like DBT, Apache Airflow, Matillion, or Talend. Automation, data transformation, and pipeline reliability will be your focus.
- Integrating with AWS Glue by utilizing capabilities such as crawlers, jobs, and external tables for seamless integration with Snowflake. You will ensure consistent and automated data ingestion and cataloging.
- Enforcing data governance, role-based access control, and compliance protocols within Snowflake to ensure secure handling of sensitive data and privacy adherence.
- Handling diverse data formats including structured and semi-structured formats like JSON, Parquet, Avro, XML, etc., to enable flexibility in data consumption across reporting and analytics.
- Designing dimensional models optimized for Snowflake architecture, including fact and dimension tables, to enable efficient querying and integration with BI tools.
- Collaborating with business stakeholders, data analysts, and BI developers to translate business requirements into scalable data solutions.
- Monitoring end-to-end data workflows, ensuring system reliability, and proactively troubleshooting failures and performance bottlenecks.

Key Skills & Qualifications:
- Hands-on experience with Snowflake development and architecture.
- Proficiency in SQL, Python, and cloud-native ETL/ELT tools.
- Experience with AWS Glue, S3, and Snowflake integration.
- Strong knowledge of data modeling, performance tuning, and cost optimization.
- Familiarity with handling semi-structured data.
- Good understanding of data governance, access control, and security best practices.
- Excellent problem-solving and communication skills.

Nice To Have:
- Experience working with Big 4 consulting clients or large enterprise environments.
- Exposure to DevOps practices, CI/CD pipelines, and data quality frameworks.

If you are looking to leverage your expertise in Snowflake and cloud-based data warehousing to drive technical innovation and deliver scalable solutions, this role offers an exciting opportunity to grow your career and make a significant impact.

Sr BA - Lead Bengaluru, Karnataka, India 5 years Not disclosed On-site Full Time

We’re Hiring: Production Support Sr. Business Analyst / Lead
Location: [Pune / Chennai] | Experience: 5+ years | Industry: Insurance

About the Role: We are looking for a Production Support Sr. Business Analyst / Lead to join our team and ensure smooth operations of our insurance policy administration system (ORIGAMI). This role involves incident management, system troubleshooting, and driving enhancements to improve functionality and user experience. If you have strong insurance domain expertise, love solving complex problems, and enjoy working at the intersection of business and technology, this role is for you!

What You’ll Do:
✅ Incident & Problem Management – Triage issues, analyze logs, identify root causes, and implement solutions to restore services quickly.
✅ Enhancements & Change Requests – Gather business requirements, work with configuration teams, and ensure successful delivery of system changes.
✅ Testing & UAT Support – Validate fixes and enhancements through functional testing and coordinate UAT activities with business teams.

What We’re Looking For:
✔ 5+ years of Business Analyst experience in the insurance industry
✔ Solid knowledge of commercial insurance processes and policy lifecycle development
✔ Prior experience with PAS systems and rating programs
✔ Strong analytical skills and ability to recommend technical solutions
✔ Excellent communication & stakeholder management skills
✔ Hands-on experience with DevOps

What Sets You Apart:
✅ Proactive & self-driven with a collaborative mindset
✅ Problem-solving attitude and ability to work independently

If you’re passionate about delivering value, ensuring system stability, and enhancing user experience, we’d love to hear from you!
📩 Apply now or reach out via DM for more details.

Kafka Experts with Java Bengaluru, Karnataka 8 years Not disclosed On-site Full Time

The Kafka Experts with Java role demands extensive experience in designing and implementing distributed event-streaming platforms using Apache Kafka technologies, including Kafka Streams, Kafka Connect, the Producer and Consumer APIs, and Kafka Cluster Management. Proficiency in Java Concurrency, Kafka Schema Registry, Avro Serialization, and Kafka Security protocols such as SASL/SSL is essential. The candidate must excel in Kafka performance tuning, transaction management, and monitoring tools while leveraging Java 8+ features. Strong problem-solving skills and system design thinking are critical to architect scalable, fault-tolerant streaming applications. Effective communication and collaboration with cross-functional teams ensure seamless enterprise-wide data integration. Analytical thinking, adaptability, and attention to detail support continuous improvement and operational excellence. Time management skills are necessary to meet project deadlines in a dynamic environment. This role requires guiding teams on best practices for event-driven architectures to drive innovation and reliability.

Responsibilities:
- Demonstrate deep knowledge of Apache Kafka Streams, Kafka Connect, the Kafka Producer and Consumer APIs, Java Concurrency, and Kafka Cluster Management to architect robust streaming solutions.
- Design and implement distributed event-streaming platforms ensuring high availability, scalability, and fault tolerance across enterprise systems.
- Build and maintain real-time data pipelines and streaming applications that meet business and technical requirements.
- Collaborate closely with architecture and engineering teams to integrate Kafka-based solutions into enterprise-wide data ecosystems.
- Guide and mentor development teams on best practices for event-driven architectures, Kafka security configurations, and schema management.
- Monitor Kafka clusters proactively using specialized tools to optimize performance, troubleshoot issues, and implement tuning strategies.
- Manage Kafka partitioning strategies and transactions to ensure data consistency and efficient resource utilization.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field is required.
- 8 to 16 years of professional experience in Kafka and Java development.
- Proven experience in designing and implementing large-scale distributed streaming platforms is highly preferred.

This long-term CTH opportunity for Kafka Experts with Java offers a chance to design and implement scalable, fault-tolerant event-streaming platforms using Apache Kafka and Java technologies. Candidates with strong problem-solving, system design, and collaboration skills will thrive. There is potential for conversion to a full-time role at Deloitte based on performance and project needs.
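As a conceptual illustration only: the produce/consume flow behind the event-streaming work described above can be sketched in a few lines. The role itself is Java-centric; the Python snippet below (using the kafka-python package) only shows the flow, and the broker address and topic name are placeholders.

```python
# Conceptual producer/consumer sketch using the kafka-python package.
# (The role is Java-centric; this only illustrates the produce/consume flow.
#  Topic name and broker address are placeholders.)
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"   # placeholder bootstrap server
TOPIC = "orders"            # placeholder topic

# Produce one JSON-encoded event.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 42, "status": "created"})
producer.flush()

# Consume events from the beginning of the topic.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,  # stop iterating if no messages arrive
)
for message in consumer:
    print(message.offset, message.value)
```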

FlutterFlow Tech Leads Bengaluru 8 - 16 years INR 1.82952 - 7.0 Lacs P.A. On-site Full Time

The FlutterFlow Tech Lead will spearhead mobile and web application development using FlutterFlow and Flutter, ensuring scalable and high-performance UI/UX designs. This role demands expertise in Dart programming, Firebase integration, API integration, custom widget development, and state management. The candidate will implement backend logic, integrate third-party plugins, and maintain version control through Git. Proficiency in CI/CD pipelines, performance optimization, debugging, cross-platform deployment, database schema design, and security best practices is essential. Strong team leadership and mentoring skills will drive best practices and foster junior developer growth. The role requires effective communication, stakeholder collaboration, and agile project management to deliver quality solutions on time. Time management and problem-solving abilities are critical to managing complex development cycles and ensuring project success.

Responsibilities:
- Demonstrate deep knowledge of FlutterFlow UI Design, Dart Programming, Firebase Integration, API Integration, and Custom Widget Development to lead technical solutions effectively.
- Lead the end-to-end development of mobile and web applications using FlutterFlow and Flutter frameworks, ensuring adherence to best practices and coding standards.
- Design and implement scalable, responsive, and high-performance UI/UX solutions that meet business requirements and enhance user experience.
- Ensure code quality, security, reusability, and maintainability through rigorous code reviews, testing, and adherence to security best practices.
- Mentor and coach junior developers, promoting knowledge sharing, skill development, and adherence to agile methodologies within the team.
- Collaborate closely with stakeholders and cross-functional teams to align project goals, manage timelines, and deliver high-quality software solutions.
- Optimize application performance, troubleshoot issues, and implement CI/CD pipelines to streamline deployment and continuous integration processes.

Qualifications:
- A Bachelor's degree in Computer Science, Information Technology, or a related field is required.
- 8 to 16 years of professional experience in software development with a focus on FlutterFlow and Flutter technologies.
- Proven experience in leading technical teams and managing complex mobile and web application projects is essential.

We invite experienced FlutterFlow Tech Leads with 8-16 years of expertise in UI design, Dart programming, Firebase, API integration, and backend logic to lead innovative mobile and web app projects. This long-term role emphasizes code quality, security, mentoring, and agile leadership, with potential conversion to a full-time position at Deloitte based on performance.

Gen AI Developer Bengaluru 1 - 2 years INR 3.0 - 5.625 Lacs P.A. On-site Part Time

The Gen AI Developer role requires hands-on experience in developing AI agents using Python, agentic AI frameworks, generative AI models, and large language models (LLMs). The candidate will apply problem-solving and critical thinking skills to design, implement, and optimize intelligent systems. Collaboration and effective communication are essential for working within cross-functional teams and translating complex AI concepts. Adaptability and creativity are vital to innovate and respond to evolving AI technologies. Time management skills ensure timely delivery of projects while balancing multiple tasks. Continuous learning is encouraged to stay updated with advancements in Gen AI and agentic AI domains. The developer will contribute to building scalable AI solutions that meet business objectives. This role demands a proactive approach to integrating AI agents into practical applications with measurable impact.

Responsibilities:
- Demonstrate strong knowledge of Python programming, agentic AI, generative AI, and large language models to develop and deploy AI agents effectively.
- Design, develop, and maintain AI agents that perform autonomous tasks and enhance user interactions within various applications.
- Collaborate with data scientists, engineers, and product teams to integrate AI solutions seamlessly into existing systems.
- Analyze and troubleshoot AI agent performance issues, applying critical thinking to optimize algorithms and improve accuracy.
- Adapt to new AI technologies and frameworks, incorporating innovative approaches to improve agent capabilities.

Qualifications:
- A Bachelor's degree in Computer Science, Artificial Intelligence, or a related field is required.
- 1-2 years of professional experience developing AI agents using Python and generative AI technologies.
- Prior experience working with large language models and agentic AI frameworks is essential for this role.

We seek a Gen AI Developer with 1-2 years of experience who has demonstrable expertise in developing AI agents using Python, agentic AI, Gen AI, and LLMs. The ideal candidate will exhibit strong problem-solving, critical thinking, collaboration, and communication skills, alongside adaptability, creativity, time management, and a commitment to continuous learning. This role offers a unique opportunity to innovate and contribute to cutting-edge AI solutions.
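Purely as an illustrative sketch (not part of the posting): "agentic" behaviour of the kind described above usually means letting an LLM decide when to call a tool and feeding the result back. The tool, schema, and model id below are hypothetical placeholders, assuming the openai>=1.0 Python SDK.

```python
# Minimal tool-calling loop sketch with the openai>=1.0 SDK. The tool,
# model id, and messages are placeholders, not details from the posting.
import json
from openai import OpenAI

client = OpenAI()

def get_order_status(order_id: str) -> str:
    """Hypothetical business tool the agent may call."""
    return f"Order {order_id} is out for delivery."

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the delivery status of an order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

messages = [{"role": "user", "content": "Where is order 1042?"}]
response = client.chat.completions.create(
    model="gpt-4o-mini", messages=messages, tools=tools  # placeholder model id
)
msg = response.choices[0].message

# If the model asked for the tool, run it and send the result back.
if msg.tool_calls:
    call = msg.tool_calls[0]
    result = get_order_status(**json.loads(call.function.arguments))
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": result}]
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(final.choices[0].message.content)
```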

Kafka Experts with Java Bengaluru 8 years INR 2.1 - 6.75 Lacs P.A. On-site Full Time

The Kafka Experts with Java role demands extensive experience in designing and implementing distributed event-streaming platforms using Apache Kafka technologies, including Kafka Streams, Kafka Connect, the Producer and Consumer APIs, and Kafka Cluster Management. Proficiency in Java Concurrency, Kafka Schema Registry, Avro Serialization, and Kafka Security protocols such as SASL/SSL is essential. The candidate must excel in Kafka performance tuning, transaction management, and monitoring tools while leveraging Java 8+ features. Strong problem-solving skills and system design thinking are critical to architect scalable, fault-tolerant streaming applications. Effective communication and collaboration with cross-functional teams ensure seamless enterprise-wide data integration. Analytical thinking, adaptability, and attention to detail support continuous improvement and operational excellence. Time management skills are necessary to meet project deadlines in a dynamic environment. This role requires guiding teams on best practices for event-driven architectures to drive innovation and reliability.

Responsibilities:
- Demonstrate deep knowledge of Apache Kafka Streams, Kafka Connect, the Kafka Producer and Consumer APIs, Java Concurrency, and Kafka Cluster Management to architect robust streaming solutions.
- Design and implement distributed event-streaming platforms ensuring high availability, scalability, and fault tolerance across enterprise systems.
- Build and maintain real-time data pipelines and streaming applications that meet business and technical requirements.
- Collaborate closely with architecture and engineering teams to integrate Kafka-based solutions into enterprise-wide data ecosystems.
- Guide and mentor development teams on best practices for event-driven architectures, Kafka security configurations, and schema management.
- Monitor Kafka clusters proactively using specialized tools to optimize performance, troubleshoot issues, and implement tuning strategies.
- Manage Kafka partitioning strategies and transactions to ensure data consistency and efficient resource utilization.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field is required.
- 8 to 16 years of professional experience in Kafka and Java development.
- Proven experience in designing and implementing large-scale distributed streaming platforms is highly preferred.

This long-term CTH opportunity for Kafka Experts with Java offers a chance to design and implement scalable, fault-tolerant event-streaming platforms using Apache Kafka and Java technologies. Candidates with strong problem-solving, system design, and collaboration skills will thrive. There is potential for conversion to a full-time role at Deloitte based on performance and project needs.

Java Spring Boot Tech Leads Bengaluru 8 - 12 years INR 1.82952 - 7.0 Lacs P.A. On-site Full Time

The Java Spring Boot Tech Lead will design and develop robust microservices leveraging the Spring Boot framework, ensuring seamless integration with RESTful APIs, databases, and middleware components. This role demands expertise in microservices architecture, Spring Security, Spring Data JPA, Hibernate ORM, and containerization technologies such as Docker and Kubernetes. The candidate will optimize application performance and scalability while managing CI/CD pipelines and cloud services on AWS or GCP. Strong proficiency in messaging systems like Kafka or RabbitMQ, along with SQL and NoSQL databases, is essential. The Tech Lead will provide technical leadership through code reviews, architectural decisions, and mentoring junior developers. Effective communication, stakeholder management, and agile methodologies will be critical to drive project success. Time management, decision-making, and problem-solving skills will ensure delivery excellence and team productivity.

Responsibilities:
- Demonstrate deep knowledge of the Spring Boot Framework, RESTful API Design, Microservices Architecture, Spring Security, and Spring Data JPA to lead technical development effectively.
- Design, develop, and maintain scalable microservices ensuring integration with REST APIs, databases, and middleware systems.
- Optimize enterprise application performance and scalability by applying best practices in coding, architecture, and infrastructure.
- Lead code reviews and make informed technical decisions to uphold high-quality standards and maintainable codebases.
- Mentor and coach junior developers to foster skill development and promote adherence to best practices and coding standards.
- Collaborate with cross-functional teams and stakeholders to align technical solutions with business goals using Agile methodologies.
- Manage CI/CD pipelines, container orchestration with Docker and Kubernetes, and cloud deployments on AWS or GCP to streamline delivery.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field is required.
- A minimum of 8 to 12 years of professional experience in Java development with a focus on Spring Boot and microservices architecture is essential.
- Proven experience in technical leadership roles within Agile environments is preferred.

This long-term CTH opportunity for Java Spring Boot Tech Leads requires expertise in microservices design, RESTful APIs, Spring Security, and cloud technologies. Candidates will lead development, optimize performance, and mentor teams while working in Agile settings. There is potential for conversion to a full-time Deloitte position based on performance and project needs.

AWS Data Engineer Bengaluru 5 - 8 years INR 5.3725 - 8.0 Lacs P.A. On-site Part Time

The AWS Data Engineer will design, develop, and maintain scalable data pipelines and architectures using AWS Glue, Amazon Redshift, AWS Lambda, and Amazon S3. This role requires expertise in AWS Kinesis, AWS CloudFormation, Apache Spark, Python, and AWS IAM to ensure secure and efficient data processing. The candidate will leverage AWS Data Pipeline, AWS Athena, Docker, Terraform, and Apache Airflow to automate workflows and optimize data integration. Proficiency in AWS CloudWatch, Scala, SQL, PySpark, CI/CD, and DevOps practices is essential for monitoring and continuous improvement. Strong problem-solving skills and critical thinking will drive innovative solutions to complex data challenges. Effective communication and collaboration are vital for working with cross-functional teams and stakeholders. Attention to detail and time management will ensure timely delivery of high-quality data solutions. Adaptability and project management skills will support evolving business needs and technology landscapes.

Responsibilities:
- Demonstrate expert knowledge in AWS Glue, Amazon Redshift, AWS Lambda, Amazon S3, and Apache Spark to build and optimize data solutions.
- Design, implement, and maintain robust data pipelines and ETL processes using AWS Data Pipeline, Apache Airflow, and Python to support analytics and reporting.
- Develop infrastructure as code using AWS CloudFormation and Terraform to automate deployment and ensure scalable cloud environments.
- Monitor data workflows and system performance with AWS CloudWatch and implement CI/CD pipelines to enhance deployment efficiency and reliability.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights.
- Apply best practices in AWS IAM for secure access control and compliance across data platforms and services.
- Manage containerized applications using Docker and integrate DevOps methodologies to streamline development and operational processes.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field is required.
- 5-8 years of professional experience in data engineering with a focus on AWS technologies.
- Proven experience in designing and managing cloud-based data infrastructure and pipelines is essential.

We invite skilled AWS Data Engineers with 5-8 years of experience and expertise in AWS Glue, Redshift, Lambda, and related technologies to join our team. Candidates should excel in problem solving, effective communication, and project management to drive innovative data solutions in a collaborative and dynamic environment.
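As a hedged sketch only (the listing contains no code): Athena work of the kind listed above is often driven from Python with boto3, submitting a query and polling until it finishes. The database, query, S3 output location, and region below are placeholders.

```python
# Sketch of running an Athena query with boto3 and waiting for the result.
# Database, query, region, and S3 output location are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")  # placeholder region

def run_query(sql: str) -> list:
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "analytics"},                    # placeholder
        ResultConfiguration={"OutputLocation": "s3://my-bucket/athena/"},   # placeholder
    )["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

if __name__ == "__main__":
    for row in run_query("SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date LIMIT 5"):
        print([col.get("VarCharValue") for col in row["Data"]])
```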

Artificial Intelligence Engineer - LLM Models Delhi, Delhi, India 2 - 5 years INR Not disclosed On-site Full Time

Artificial Intelligence Engineer | Experience: 2-5 years | Location: Delhi

Skills: We're looking for sharp AI developers with hands-on experience in AI development, ideally with a strong foundation in Python (7-8/10 skill level) and practical exposure to AI/ML concepts and OpenAI agent development. Candidates should possess strong logical thinking, prompt engineering skills, and clear communication abilities. This role offers the opportunity to work on real-world AI agent applications in a fast-growing SaaS platform focused on sales and productivity tools for consulting and financial services.

Education: Full-time B.Tech/BE in Computer Science or a related field.

What You'll Do:
- Build intelligent AI agents using OpenAI and LLMs.
- Design APIs and backend logic to integrate AI into SaaS workflows.
- Create AI-driven features in Python (Django/Flask).
- Work on CRM and third-party tool integrations.
- Continuously improve agent capabilities using prompt engineering and user feedback.

You Should Have:
- Strong Python coding skills (7-8/10 level).
- Hands-on experience with OpenAI APIs and LLMs.
- Understanding of AI/ML models and architectures.
- Prompt-writing capability with logical clarity.
- Familiarity with REST APIs, GitHub, and agile workflows.
- Bonus: Exposure to React, Mongo/PostgreSQL, GraphQL, or CRM systems.