6.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Title: Senior Java Developer (Remote)
Experience: 6 to 8 years
Location: Remote
Employment Type: Full-time
Notice Period: Immediate joiner

Job Summary:
We are looking for a highly skilled and experienced Senior Java Developer to join our distributed team. The ideal candidate should have a strong background in developing scalable, enterprise-grade applications using Java and related technologies, with exposure to full-stack development, system integration, and performance optimization.

Key Responsibilities:
- Design and develop high-performance, scalable, and reusable Java-based applications.
- Build RESTful APIs with a strong understanding of RESTful architecture.
- Implement enterprise integration patterns using Apache Camel or Spring Integration.
- Ensure application security in compliance with OWASP guidelines.
- Write and maintain unit, integration, and BDD tests using JUnit, Cucumber, and Selenium.
- Conduct performance and load testing; optimize through memory and thread dump analysis.
- Collaborate with product owners, QA teams, and other developers in Agile/Scrum environments.
- Participate in code reviews and architecture discussions, and mentor junior developers.

Technical Skills & Experience Required:
Core Backend:
- Strong proficiency in Java (8 or higher)
- Proficient in Spring Boot, Spring Security, Spring MVC, and Spring Data
- Solid experience with REST API design, implementation, and testing using Postman and SoapUI
- Unit testing, integration testing, and BDD testing
Web Services and Integration:
- Experience with XML, web services (RESTful and SOAP), and Apache CXF
- Knowledge of enterprise integration patterns
- Exposure to Apache Camel or Spring Integration
Frontend & Full Stack:
- Familiarity with HTML5 and CSS3
- Experience with TypeScript, JavaScript, jQuery, and Node.js
- Working knowledge of Webpack and Gulp
Database & Data Streaming:
- Strong in RDBMS and database design (e.g., Oracle, PL/SQL)
- Exposure to MongoDB and NoSQL
- Understanding of Kafka architecture and Kafka as a data streaming platform
Performance & Security:
- Experience in performance analysis and application tuning
- Understanding of security aspects and OWASP guidelines
- Experience with memory and thread dump analysis
Cloud & DevOps:
- Working knowledge of Kubernetes
- Familiarity with Elastic solutions at the enterprise level
- Experience with identity and access management tools like ForgeRock

About IGT Solutions:
IGT Solutions is a next-gen customer experience (CX) company, defining and delivering transformative experiences for the most innovative global brands using digital technologies. By combining digital and human intelligence, IGT has become the preferred partner for managing end-to-end CX journeys across the travel and high-growth tech industries. We have a global delivery footprint spread across 30 delivery centers in China, Colombia, Egypt, India, Indonesia, Malaysia, the Philippines, Romania, South Africa, Spain, the UAE, the US, and Vietnam, with 25,000+ CX and technology experts from 35+ nationalities.

IGT's Digital team collaborates closely with our customers' business and technology teams to take solutions to market faster while sustaining quality and focusing on business value and the overall end-customer experience. Our offerings include industry solutions as well as digital services. We work with leading global enterprise customers to improve synergies between business and technology by enabling rapid business value realization through digital technologies. These include lifecycle transformation and rapid development/technology solution delivery services, delivered using traditional as well as digital technologies, deep functional understanding, and software engineering expertise.

IGT is ISO 27001:2013, CMMI SVC Level 5, and ISAE 3402 compliant for IT, and COPC® Certified v6.0, ISO 27001:2013, and PCI DSS 3.2 certified for BPO processes. The organization follows Six Sigma rigor for process improvements. It is our policy to provide equal employment opportunities to all individuals based on job-related qualifications and ability to perform a job, without regard to age, gender, gender identity, sexual orientation, race, color, religion, creed, national origin, disability, genetic information, veteran status, citizenship, or marital status, and to maintain a non-discriminatory environment free from intimidation, harassment, or bias based upon these grounds.
Posted 3 days ago
4.0 - 9.0 years
6 - 11 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS cloud data platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases (a minimal PySpark sketch follows this listing).
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
- Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies built on the platform.
- Develop streaming pipelines.
- Work with Hadoop/AWS ecosystem components (Apache Spark, Kafka, cloud computing services, etc.) to implement scalable solutions that meet ever-increasing data volumes.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise:
- Minimum 4 years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience with cloud data platforms on AWS
- Experience with AWS EMR, AWS Glue, Databricks, Amazon Redshift, and DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers like Kafka

Preferred technical and professional experience:
- Certification in AWS, and Databricks- or Cloudera Spark-certified developers
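To make the Spark-with-Python stack above concrete, here is a minimal sketch of the kind of batch ingest-and-transform job such a role involves. The input path, column names, and S3 bucket are illustrative assumptions, not details taken from the posting.

```python
# Minimal PySpark batch job: ingest a CSV feed, aggregate, write partitioned Parquet.
# All paths and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

orders = (
    spark.read.option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-bucket/raw/orders/")  # hypothetical input location
)

daily_totals = (
    orders.withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

# Partitioned Parquet is a common layout for downstream Hive/Athena queries.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/daily_order_totals/"
)
spark.stop()
```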
Posted 3 days ago
15.0 - 20.0 years
17 - 22 Lacs
Mumbai
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Apache Spark
Good-to-have skills: Java Enterprise Edition
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of your projects, ensuring that the applications you develop are efficient and effective in meeting user needs.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve application performance and user experience.

Professional & Technical Skills:
- Must-have: Proficiency in Apache Spark.
- Good to have: Experience with Java Enterprise Edition.
- Strong understanding of distributed computing principles.
- Experience with data processing frameworks and tools.
- Familiarity with cloud platforms and services.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Apache Spark.
- This position is based in Mumbai.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education
Posted 3 days ago
5.0 - 10.0 years
7 - 12 Lacs
Kochi
Work from Office
Develop user-friendly web applications using Java and React.js while ensuring high performance. Design, develop, test, and deploy robust and scalable applications, building and consuming RESTful APIs. Collaborate with the design and development teams to translate UI/UX design wireframes into functional components. Optimize applications for maximum speed and scalability, and stay up to date with the latest Java and React.js trends, techniques, and best practices.

Responsibilities:
- Participate in code reviews to maintain code quality and ensure alignment with coding standards.
- Identify and address performance bottlenecks and other issues as they arise.
- Help us shape the future of event-driven technologies, including contributing to Apache Kafka, Strimzi, Apache Flink, Vert.x, and other relevant open-source projects.
- Collaborate within a dynamic team environment to comprehend and dissect intricate requirements for event processing solutions.
- Translate architectural blueprints into working code, employing your technical expertise to implement innovative and effective solutions.
- Conduct comprehensive testing of the developed solutions, ensuring their reliability, efficiency, and seamless integration.
- Provide ongoing support for the implemented applications, responding promptly to customer inquiries, resolving issues, and optimizing performance.
- Serve as a subject matter expert, sharing insights and best practices related to product development and fostering knowledge sharing within the team.
- Continuously monitor the evolving landscape of event-driven technologies, staying updated on the latest trends and advancements.
- Collaborate closely with cross-functional teams, including product managers, designers, and developers, to ensure a holistic and harmonious product development process.
- Take ownership of technical challenges and lead your team to successful delivery, using your problem-solving skills to overcome obstacles.
- Mentor and guide junior developers, nurturing their growth through guidance, knowledge transfer, and hands-on training.
- Engage in agile practices, contributing to backlog grooming, sprint planning, stand-ups, and retrospectives to facilitate effective project management and iteration.
- Foster a culture of innovation and collaboration, contributing to brainstorming sessions and offering creative ideas to push the boundaries of event processing solutions.
- Maintain documentation for the developed solutions, ensuring comprehensive and up-to-date records for future reference and knowledge sharing.
- Be involved in building and orchestrating containerized services.

Required education: Bachelor's degree
Preferred education: Bachelor's degree

Required technical and professional expertise:
- Proven 5+ years of experience as a full-stack developer (Java and React.js) with a strong portfolio of previous projects.
- Proficiency in Java, JavaScript, HTML, CSS, and related web technologies.
- Familiarity with RESTful APIs and their integration into applications.
- Knowledge of modern CI/CD pipelines and tools like Jenkins and Travis.
- Strong understanding of version control systems, particularly Git.
- Good communication skills and the ability to articulate technical concepts to both technical and non-technical team members.
- Familiarity with containerization and orchestration technologies like Docker and Kubernetes for deploying event processing applications.
- Proficiency in troubleshooting and debugging.
- Exceptional problem-solving and analytical abilities, with a knack for addressing technical challenges.
- Ability to work collaboratively in an agile and fast-paced development environment.
- Leadership skills to guide and mentor junior developers, fostering their growth and skill development.
- Strong organizational and time management skills to manage multiple tasks and priorities effectively.
- Adaptability to stay current with evolving event-driven technologies and industry trends.
- Customer-focused mindset, with a dedication to delivering solutions that meet or exceed customer expectations.
- Creative thinking and an innovation mindset to drive continuous improvement and explore new possibilities.
- Collaborative and team-oriented approach to work, valuing open communication and diverse perspectives.

Preferred technical and professional experience
Posted 3 days ago
10.0 - 13.0 years
25 - 40 Lacs
Gurugram
Work from Office
We're Nagarro. We are a digital product engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18,000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

REQUIREMENTS:
- Total experience: 10+ years.
- Strong working experience in big data engineering.
- Expertise in AWS Glue, including Crawlers and the Data Catalog (a skeleton Glue job follows this listing).
- Hands-on experience in Python and PySpark for data engineering tasks.
- Strong working experience with Terraform and/or CloudFormation.
- Strong experience with Snowflake, including data loading, transformations, and querying.
- Expertise in CI/CD pipelines, preferably using GitHub Actions.
- Strong working knowledge of AWS services like S3, SNS, Secrets Manager, Athena, and Lambda.
- Familiarity with Jira and GitHub for Agile project management and version control.
- Excellent problem-solving skills and the ability to resolve complex functional issues independently.
- Strong documentation skills, including the creation of configuration guides, test scripts, and user manuals.

RESPONSIBILITIES:
- Writing and reviewing great quality code.
- Understanding the client's business use cases and technical requirements, and converting them into a technical design that elegantly meets those requirements.
- Mapping decisions to requirements and translating them for developers.
- Identifying different solutions and narrowing down the best option that meets the client's requirements.
- Defining guidelines and benchmarks for NFR considerations during project implementation.
- Writing and reviewing design documents that explain the overall architecture, framework, and high-level design of the application for developers.
- Reviewing architecture and design on aspects like extensibility, scalability, security, design patterns, user experience, and NFRs, and ensuring that all relevant best practices are followed.
- Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to realize it.
- Understanding and relating technology integration scenarios and applying these learnings in projects.
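For context on the AWS Glue work described above, a skeleton Glue Spark job might look like the following. The catalog database, table name, column mappings, and output path are hypothetical placeholders, not details from the posting.

```python
# Skeleton AWS Glue job: read from the Data Catalog, remap columns, write Parquet to S3.
# The source table is assumed to have been populated by a Glue Crawler.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_events"  # hypothetical catalog entries
)

# Each mapping is (source column, source type, target column, target type).
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[("event_id", "string", "event_id", "string"),
              ("ts", "string", "event_ts", "timestamp")],
)

glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/events/"},
    format="parquet",
)
job.commit()
```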
Posted 3 days ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a talented Full Stack Developer experienced in Java, Kotlin, Spring Boot, Angular, and Apache Kafka to join our dynamic engineering team. The ideal candidate will design, develop, and maintain end-to-end web applications and real-time data processing solutions, leveraging modern frameworks and event-driven architectures.

Location: Noida/Pune/Bangalore/Hyderabad/Chennai
Timings: 2 pm to 11 pm
Experience: 4-6 years

Key Responsibilities:
- Design, develop, and maintain scalable web applications using Java, Kotlin, Spring Boot, and Angular.
- Build and integrate RESTful APIs and microservices to connect frontend and backend components.
- Develop and maintain real-time data pipelines and event-driven features using Apache Kafka.
- Collaborate with cross-functional teams (UI/UX, QA, DevOps, Product) to define, design, and deliver new features.
- Write clean, efficient, and well-documented code following industry best practices and coding standards.
- Participate in code reviews, provide constructive feedback, and ensure code quality and consistency.
- Troubleshoot and resolve application issues, bugs, and performance bottlenecks in a timely manner.
- Optimize applications for maximum speed, scalability, and security.
- Stay updated on the latest industry trends, tools, and technologies, and proactively suggest improvements.
- Participate in Agile/Scrum ceremonies and contribute to continuous integration and delivery pipelines.

Required Qualifications:
- Experience with cloud-based technologies and deployment (Azure, GCP).
- Familiarity with containerization (Docker, Kubernetes) and microservices architecture.
- Proven experience as a Full Stack Developer with hands-on expertise in Java, Kotlin, Spring Boot, and Angular (Angular 2+).
- Strong understanding of object-oriented and functional programming principles.
- Experience designing and implementing RESTful APIs and integrating them with frontend applications.
- Proficiency in building event-driven and streaming applications using Apache Kafka.
- Experience with database systems (SQL/NoSQL), ORM frameworks (e.g., Hibernate, JPA), and SQL.
- Familiarity with version control systems (Git) and CI/CD pipelines.
- Good understanding of HTML5, CSS3, JavaScript, and TypeScript.
- Experience with Agile development methodologies and working collaboratively in a team environment.
- Excellent problem-solving, analytical, and communication skills.
Posted 3 days ago
14.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
General Information
Job Role: Lead DevOps Engineer
Functional Area: DevOps
Job Location: Pan India
Job Shift: General Indian/UK shift
Education: B.Sc./B.Tech/B.E./M.Tech in any specialization
Employment Type: Full-time, permanent

About Unified Infotech
Embark on a transformative journey with Unified Infotech, a beacon of innovation and excellence in the tech consulting and software development landscape for over 14 years. We are dedicated to designing custom, forward-thinking web, mobile, and software solutions for a diverse clientele, from burgeoning MSMEs to towering enterprises. Our mission is to engineer products that not only solve complex challenges but also set new benchmarks in the digital realm. At Unified, a job is not simply a job. It is a pursuit of excellence: to build and create, to understand and consult, to imagine and be creative, to reformulate UX, to invent and redefine, to code for performance, to collaborate and communicate.

Role Description
We are seeking a highly skilled and motivated DevOps Lead with expertise in both the AWS and Azure cloud platforms to join our dynamic team. The successful candidate will collaborate with solution architects, developers, project managers, customer technical teams, and internal stakeholders to drive results. Your primary focus will be ensuring seamless customer access to applications in the cloud, managing customer workload migrations, implementing robust backup policies, overseeing hybrid cloud deployments, and building solutions for service assurance, with a strong emphasis on leveraging Azure's unique capabilities.

Desired Experience
- Define architecture, design, implement, program manage, and lead technology teams in delivering complex technical solutions for our clients across both the AWS and Azure platforms.
- Span the DevOps, Continuous Integration (CI), and Continuous Delivery (CD) areas, providing demonstrable implementation experience in shaping value-add consulting solutions.
- Deploy, automate, and maintain cloud infrastructure with leading public cloud vendors such as Amazon Web Services (AWS) and Microsoft Azure, with a keen focus on integrating Azure-specific services and tools.
- Set up backups, replication, and archiving, and implement disaster recovery measures leveraging Azure's resilience and geo-redundancy features.
- Utilize Azure DevOps services for better collaboration, reporting, and increased automation in CI/CD pipelines.

Job Requirements
- Detail-oriented with a holistic perspective on system architecture, including at least 1 year of hands-on experience with Azure cloud services.
- Strong shell scripting and Linux administration skills, with a deep understanding of Linux and virtualization.
- Expertise in server technologies like Apache, Nginx, and Node, including optimization experience.
- Knowledge of database technologies such as MySQL, Redis, and MongoDB, with proficiency in management, replication, and disaster recovery.
- Proven experience in medium- to large-scale public cloud deployments on AWS and Azure, including the migration of complex, multi-tier applications to these platforms.
- In-depth working knowledge of AWS and Azure, showcasing the ability to leverage Azure-specific features such as Azure Active Directory, Azure Kubernetes Service (AKS), Azure Functions, and Azure Logic Apps.
- Familiarity with CI/CD, automation, and monitoring processes for production-level infrastructure, including the use of Azure Monitor, Azure Automation, and third-party tools.
- Practical experience in setting up full-stack monitoring solutions using Prometheus, Grafana, and Loki, including long-term storage, custom dashboard creation, alerting, and integration with Kubernetes clusters.
- Extensive work with Azure Front Door, including custom routing, WAF policies, SSL/TLS certificate integration, and performance optimization for global traffic.
- Experience in multi-ingress-controller architecture setup and management, including namespace-specific ingress deployments.
- Hands-on experience in setting up, configuring, and managing Azure API Management (APIM).
- Deep understanding of system performance and the ability to analyze root causes using the tools available in Azure.
- Experience with Azure-specific management and governance tools, such as Azure Policy, Azure Blueprints, and Azure Resource Manager (ARM) templates.
- Proficiency in CI/CD automation using tools like Jenkins, Travis CI, CircleCI, or Azure DevOps.
- Knowledge of security infrastructure and vulnerabilities, including Azure's security tools like Azure Security Center and Azure Sentinel.
- Capability to analyze costs for the entire infrastructure, including cost management and optimization in Azure environments.
- Hands-on experience with configuration management tools like Ansible, Puppet, Chef, or similar, with an emphasis on their integration in Azure environments.
- Experience with container orchestration tools such as Kubernetes, Docker Swarm, and Docker containers, with a preference for candidates proficient in Azure Kubernetes Service (AKS).

Total experience: 6+ years
Cloud experience: AWS 3+ years, Azure 1+ years
Notice period: Immediate to 30 days preferred.
Posted 3 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Provide support for data production systems in Nielsen Technology International Media (television and radio audience measurement), playing a critical role in ensuring their reliability, scalability, and security. Configure, implement, and deploy audience measurement solutions. Provide expert-level support, lead infrastructure automation initiatives, drive continuous improvement across our DevOps practices, and support Agile processes.

Core Technologies: Linux, Airflow, Bash, CI/CD, AWS services (EC2, S3, RDS, EKS, VPC), PostgreSQL, Python, Kubernetes.

Responsibilities:
- Architect, manage, and optimize scalable and secure cloud infrastructure (AWS) using infrastructure as code (Terraform, CloudFormation, Ansible).
- Implement and maintain robust CI/CD pipelines to streamline software deployment and infrastructure changes.
- Identify and implement cost-optimization strategies for cloud resources.
- Ensure the smooth operation of production systems across 30+ countries, providing expert-level troubleshooting and incident response.
- Manage cloud-related migration changes and updates, supporting the secure implementation of changes/fixes.
- Participate in a 24/7 on-call rotation for emergency support.

Key Skills:
- Proficiency in Linux, particularly Fedora- and Debian-based distributions (AlmaLinux, Amazon Linux, Ubuntu).
- Strong proficiency in scripting languages (Bash, Python) and SQL.
- Well versed in automation and DevOps principles, with an understanding of CI/CD concepts.
- Working knowledge of infrastructure-as-code tools like Terraform, CloudFormation, and Ansible.
- Solid experience with AWS core services (EC2, EKS, S3, RDS, VPC, IAM, Security Groups).
- Hands-on experience with Docker and Kubernetes for containerized workloads.
- Solid understanding of DevOps practices, including monitoring, security, and high-availability design.
- Hands-on experience with Apache Airflow for workflow automation and scheduling (a minimal DAG sketch follows this listing).
- Strong troubleshooting skills, with experience in resolving issues and handling incidents in production environments.
- Foundational understanding of modern networking principles and cloud network architectures.
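As an illustration of the Airflow work listed above, a minimal Airflow 2.x DAG with a source check followed by a processing step could look like the following. The task logic, S3 bucket, and all names are assumptions for illustration only.

```python
# Minimal Airflow 2.x DAG: daily source check, then a Python processing step.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def process_measurement_data(**context):
    # Placeholder for the real processing step (e.g., loading into PostgreSQL).
    print(f"processing data for {context['ds']}")


with DAG(
    dag_id="daily_measurement_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    check_source = BashOperator(
        task_id="check_source",
        bash_command="aws s3 ls s3://example-bucket/incoming/ --summarize",
    )
    process = PythonOperator(
        task_id="process",
        python_callable=process_measurement_data,
    )
    check_source >> process
```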
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Provide support for data production systems in Nielsen Technology International Media (television and radio audience measurement), playing a critical role in ensuring their reliability, scalability, and security. Configure, implement, and deploy audience measurement solutions. Provide expert-level support, lead infrastructure automation initiatives, drive continuous improvement across our DevOps practices, and support Agile processes.

Core Technologies: Linux, Airflow, Bash, CI/CD, AWS services (EC2, S3, RDS, EKS, VPC), PostgreSQL, Python, Kubernetes.

Responsibilities:
- Architect, manage, and optimize scalable and secure cloud infrastructure (AWS) using infrastructure as code (Terraform, CloudFormation, Ansible).
- Implement and maintain robust CI/CD pipelines to streamline software deployment and infrastructure changes.
- Identify and implement cost-optimization strategies for cloud resources.
- Ensure the smooth operation of production systems across 30+ countries, providing expert-level troubleshooting and incident response.
- Manage cloud-related migration changes and updates, supporting the secure implementation of changes/fixes.
- Participate in a 24/7 on-call rotation for emergency support.

Key Skills:
- Proficiency in Linux, particularly Fedora- and Debian-based distributions (AlmaLinux, Amazon Linux, Ubuntu).
- Strong proficiency in scripting languages (Bash, Python) and SQL.
- Well versed in automation and DevOps principles, with an understanding of CI/CD concepts.
- Working knowledge of infrastructure-as-code tools like Terraform, CloudFormation, and Ansible.
- Solid experience with AWS core services (EC2, EKS, S3, RDS, VPC, IAM, Security Groups).
- Hands-on experience with Docker and Kubernetes for containerized workloads.
- Solid understanding of DevOps practices, including monitoring, security, and high-availability design.
- Hands-on experience with Apache Airflow for workflow automation and scheduling.
- Strong troubleshooting skills, with experience in resolving issues and handling incidents in production environments.
- Foundational understanding of modern networking principles and cloud network architectures.
Posted 3 days ago
7.0 years
0 Lacs
Udaipur, Rajasthan, India
On-site
Requirements:
- 7+ years of hands-on Python development experience
- Proven experience designing and leading scalable backend systems
- Expert knowledge of Python and at least one framework (e.g., Django, Flask)
- Familiarity with ORM libraries and server-side templating (Jinja2, Mako, etc.)
- Strong understanding of multi-threading, multi-process, and event-driven programming
- Proficient in user authentication, authorization, and security compliance
- Skilled in frontend basics: JavaScript, HTML5, CSS3
- Experience designing and implementing scalable backend architectures and microservices
- Ability to integrate multiple databases, data sources, and third-party services
- Proficient with version control systems (Git)
- Experience with deployment pipelines, server environment setup, and configuration
- Ability to implement and configure queueing systems like RabbitMQ or Apache Kafka (a minimal sketch follows this list)
- Write clean, reusable, testable code with strong unit test coverage
- Deep debugging skills and secure coding practices, ensuring accessibility and data protection compliance
- Optimize application performance for various platforms (web, mobile)
- Collaborate effectively with frontend developers, designers, and cross-functional teams
- Lead deployment, configuration, and server environment efforts
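As a hedged sketch of the queueing work mentioned above, here is a minimal RabbitMQ publish using the pika client. The queue name, host, and payload are illustrative assumptions; a Kafka producer would serve the same role in a Kafka-based setup.

```python
# Publish a durable task message to RabbitMQ using pika.
import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
# durable=True keeps the queue across broker restarts.
channel.queue_declare(queue="email_tasks", durable=True)

channel.basic_publish(
    exchange="",
    routing_key="email_tasks",
    body=json.dumps({"user_id": 42, "template": "welcome"}),  # hypothetical payload
    properties=pika.BasicProperties(delivery_mode=2),  # mark message persistent
)
print("task queued")
connection.close()
```

A worker process would consume from the same queue with `channel.basic_consume`, acknowledging each message after it is handled.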
Posted 3 days ago
5.0 - 7.0 years
0 Lacs
Udaipur, Rajasthan, India
Remote
At GKM IT, we're passionate about building seamless digital experiences powered by robust and intelligent data systems. We're on the lookout for a Data Engineer - Senior II to architect and maintain high-performance data platforms that fuel decision-making and innovation. If you enjoy designing scalable pipelines, optimising data systems, and leading with technical excellence, you'll thrive in our fast-paced, outcome-driven culture. You'll take ownership of building reliable, secure, and scalable data infrastructure, from streaming pipelines to data lakes. Working closely with engineers, analysts, and business teams, you'll ensure that data is not just available, but meaningful and impactful across the organization.

Requirements
- 5 to 7 years of experience in data engineering
- Architect and maintain scalable, secure, and reliable data platforms and pipelines
- Design and implement data lake/data warehouse solutions such as Redshift, BigQuery, Snowflake, or Delta Lake
- Build real-time and batch data pipelines using tools like Apache Airflow, Kafka, Spark, and dbt (a streaming sketch follows this listing)
- Ensure data governance, lineage, quality, and observability
- Collaborate with stakeholders to define data strategies, architecture, and KPIs
- Lead code reviews and enforce best practices
- Mentor junior and mid-level engineers
- Optimize query performance, data storage, and infrastructure
- Integrate CI/CD workflows for data deployment and automated testing
- Evaluate and implement new tools and technologies as required
- Demonstrate expert-level proficiency in Python and SQL
- Possess deep knowledge of distributed systems and data processing frameworks
- Be proficient in cloud platforms (AWS, GCP, or Azure), containerization, and CI/CD processes
- Have experience with streaming platforms like Kafka or Kinesis and orchestration tools
- Be highly skilled with Airflow, dbt, and data warehouse performance tuning
- Exhibit strong leadership, communication, and mentoring skills

Benefits
We don't just hire employees; we invest in people. At GKM IT, we've designed a benefits experience that's thoughtful, supportive, and actually useful. Here's what you can look forward to:
- Top-tier work setup: You'll be equipped with a premium MacBook and all the accessories you need. Great tools make great work.
- Flexible schedules & remote support: Life isn't 9-to-5. Enjoy flexible working hours, emergency work-from-home days, and utility support that makes remote life easier.
- Quarterly performance bonuses: We don't believe in waiting a whole year to celebrate your success. Perform well, and you'll see it in your paycheck, quarterly.
- Learning is funded here: Conferences, courses, certifications; if it helps you grow, we've got your back. We even offer a dedicated educational allowance.
- Family-first culture: Your loved ones matter to us too. From birthday and anniversary vouchers (Amazon, BookMyShow) to maternity and paternity leave, we're here for life outside work.
- Celebrations & gifting, the GKM IT way: Onboarding hampers, festive goodies (Diwali, Holi, New Year), and company anniversary surprises: it's always celebration season here.
- Team bonding moments: We love food, and we love people. Quarterly lunches, dinners, and fun company retreats help us stay connected beyond the screen.
- Healthcare that has you covered: Enjoy comprehensive health insurance for you and your family, because peace of mind shouldn't be optional.
- Extra rewards for extra effort: Weekend work doesn't go unnoticed, and great referrals don't go unrewarded. From incentives to bonuses, you'll feel appreciated.
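The following is a hedged sketch of the kind of real-time pipeline the requirements describe: reading a Kafka topic with Spark Structured Streaming and landing it in a Delta table. The topic, broker address, and lake paths are assumptions, and the Delta sink assumes the delta-spark package is configured on the cluster.

```python
# Spark Structured Streaming: Kafka topic -> bronze Delta table (append-only).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                      # hypothetical topic
    .load()
    .select(F.col("key").cast("string"), F.col("value").cast("string"), "timestamp")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/lake/checkpoints/events")  # required for recovery
    .outputMode("append")
    .start("/lake/bronze/events")
)
query.awaitTermination()
```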
Posted 3 days ago
3.0 - 5.0 years
17 - 30 Lacs
Bengaluru
Work from Office
Designation: Computer Vision Scientist (Geospatial Analysis)
Location: Bangalore
Position: Full-time (no hybrid)
Salary: Competitive
Company: QueNext

About Us
QueNext is an AI/ML startup set up in 2015, working in the power, banking, and agriculture sectors, among others, and working directly with the Karnataka Government to deliver insights and decisions. Company website: https://quenext.com/

You will be part of the team delivering large transformational projects to energy utilities, banks, and government bodies using our in-house patented AI-driven products. You will work on cutting-edge geospatial technologies with a steep learning curve, alongside the founders, who come from the Indian Statistical Institute (ISI) and INSEAD, and a team of data scientists and programmers in a collaborative and innovative environment. You should have a strong understanding of and interest in remote sensing and coding. A superior academic record at a leading university in Computer Science, Data Science and Technology, Geoinformatics, Mathematics, Statistics, or a related field, or equivalent work experience, is preferable.

Job Description:
We are seeking a highly skilled and innovative Computer Vision Scientist to join our team. The ideal candidate will have expertise in geographic information systems (GIS), hands-on experience with Apache Kafka, TensorFlow, and machine learning (ML) pipelines, and a strong background in computer vision. You will bring familiarity and experience with satellite and aerial imagery, geospatial APIs and libraries, knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes), and an understanding of big data frameworks (e.g., Spark, Hadoop) in geospatial contexts. This role is ideal for someone passionate about developing advanced geospatial applications, integrating cutting-edge technologies, and solving complex spatial data challenges.

Key Responsibilities:
GIS Development:
• Design, develop, and implement GIS-based applications and services
• Create, manipulate, and analyze geospatial data
• Integrate geospatial data into larger software systems
Machine Learning Pipelines:
• Build, optimize, and deploy ML pipelines for geospatial and computer vision tasks
• Leverage TensorFlow to create models for spatial analysis, object detection, and image classification (a minimal sketch follows this listing)
• Implement ML workflows from data ingestion to deployment and monitoring
Kafka Integration:
• Develop real-time data streaming and processing workflows using Apache Kafka
• Design event-driven systems for geospatial and computer vision applications
• Ensure scalability, reliability, and efficiency in Kafka-based pipelines
Computer Vision Applications:
• Apply computer vision techniques to geospatial data, satellite imagery, and aerial photography
• Develop and deploy models for tasks like feature extraction, land-use classification, and object recognition
• Stay updated on advancements in CV to enhance project outcomes
Collaboration and Documentation:
• Collaborate with cross-functional teams, including data scientists, software engineers, and GIS analysts
• Document workflows, processes, and technical details for easy replication and scalability
• Provide technical support and troubleshooting for GIS- and ML-related challenges

Technical Skills:
• Proficiency in GIS tools
• Strong expertise in Apache Kafka for real-time data streaming
• Experience with TensorFlow, Keras, or PyTorch for ML model development
• Knowledge of machine learning pipelines and tools (e.g., Kubeflow, Airflow)
• Hands-on experience with computer vision techniques and libraries (e.g., OpenCV, TensorFlow Object Detection API)
• Strong programming skills in Python, Java, or C++
• Familiarity with cloud platforms (e.g., AWS, Azure, GCP) for ML and GIS deployment
• Knowledge of geospatial data formats (e.g., GeoJSON, Shapefiles, raster)

INTERESTED CANDIDATES CAN SHARE THEIR UPDATED CV AT nawaz@stellaspire.com
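As an illustrative sketch of the land-use classification task mentioned above, a small Keras convolutional model over satellite image tiles could look like this. The tile size (64x64 RGB) and the class list are hypothetical assumptions, not specifics from the posting.

```python
# Minimal Keras CNN for classifying satellite image tiles into land-use classes.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 5  # hypothetical: water, forest, urban, cropland, barren

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),          # assumed 64x64 RGB tiles
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",  # integer class labels assumed
    metrics=["accuracy"],
)
model.summary()
```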
Posted 3 days ago
0.0 - 5.0 years
0 Lacs
Kochi, Kerala
On-site
Job Description
We are looking for a highly skilled Laravel developer with a minimum of 4-5 years of Laravel experience, well versed in current web technologies and the use of cutting-edge tools and third-party APIs. Strong knowledge of PHP, MySQL, HTML, CSS, JavaScript, and MVC architecture is required. Most importantly, candidates should have experience with custom e-commerce websites, along with familiarity with modern JavaScript frameworks like Vue.js, React, or Angular.

Responsibilities & Duties
- Design, develop, test, deploy, and support new software solutions and changes to existing software solutions.
- Translate business requirements into components of complex, loosely coupled, distributed systems.
- Create REST-based web services and APIs for consumption by mobile and web platforms.
- Perform systems analysis, code creation, testing, build/release, and technical support.
- Keep excellent, organized project records and documentation.
- Strive for innovative solutions and quality code with on-time delivery.
- Manage multiple projects with timely deadlines.

Required Experience, Skills and Qualifications:
- Working experience in the Laravel framework: at least a few projects in Laravel, or a minimum of 3-4 years of Laravel development experience.
- Working knowledge of HTML5, CSS3, and AJAX/JavaScript, jQuery, or similar libraries.
- Experience in application development in the LAMP stack (Linux, Apache, MySQL, and PHP) environment.
- Good working knowledge of object-oriented PHP (OOP) and MVC frameworks.
- Must know Laravel coding standards and best practices.
- Working experience with web service technologies such as REST and JSON, and writing REST APIs for consumption by mobile and web platforms.
- Working knowledge of Git version control.
- Exposure to responsive web design.
- Strong unit testing and debugging skills.
- Good experience with databases (MySQL) and query writing.
- Excellent teamwork and problem-solving skills, flexibility, and the ability to handle multiple tasks.
- Hands-on experience with project management tools like Desklog, Jira, or Asana.
- Understanding of server-side security, performance optimization, and cross-browser compatibility.
- Experience deploying applications on cloud platforms (AWS, Azure, or similar) is a plus.

How to Apply:
Interested candidates are invited to submit their resume and a cover letter detailing relevant experience and achievements to hr.kochi@mightywarner.com. Please include "Laravel Developer" in the subject line.

Job Type: Full-time
Pay: ₹35,000.00 - ₹45,000.00 per month
Benefits: Provident Fund
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Kochi, Kerala: Reliably commute or plan to relocate before starting work (Preferred)
Application Question(s):
- Are you ready to join immediately?
- How much custom development experience do you have?
- Do you have e-commerce development experience?
Experience: Laravel Developer: 5 years (Preferred)
Work Location: In person
Expected Start Date: 22/06/2025
Posted 3 days ago
4.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
The world's top banks use Zafin's integrated platform to drive transformative customer value. Powered by an innovative AI-powered architecture, Zafin's platform seamlessly unifies data from across the enterprise to accelerate product and pricing innovation, automate deal management and billing, and create personalized customer offerings that drive expansion and loyalty. Zafin empowers banks to drive sustainable growth, strengthen their market position, and define the future of banking centered around customer value.

Zafin is privately owned and operates out of multiple global locations including North America, Europe, and Australia. Zafin is backed by significant financial partners committed to accelerating the company's growth, fueling innovation, and ensuring Zafin's enduring value as a leading provider of cloud-based solutions to the financial services sector. Zafin is proud to be recognized as a top employer. In Canada, the UK, and India, we are certified as a "Great Place to Work". The Great Place to Work program recognizes employers who invest in and value their people and organizational culture. The company's culture is driven by strategy and focused on execution. We make and keep our commitments.

What is the opportunity?
This role is at the intersection of banking and analytics. It requires diving deep into the banking domain to understand and define the metrics, and into the technical domain to implement and present those metrics through business intelligence tools. We're building a next-generation analytics product to help banks maximize the financial wellness of their clients. The product is ambitious; that's why we're looking for a team member who is laterally skilled and comfortable with ambiguity. Reporting to the Senior Vice President, Analytics, as part of the Zafin Product Team, you are a data visualization subject matter expert who can define and implement the insights to be embedded in the product using data visualization (DataViz) tools and analytics expertise. If storytelling with data is a passion of yours, and data visualization and analytics expertise is what has enabled you to reach your current level in your career, take a look at how we do it on one of the most advanced banking technology products in the market today, and connect with us to learn more.

Location: Chennai or Trivandrum, India

Purpose of the Role
As a Software Engineer - APIs & Data Services, you will own the "last mile" that transforms data pipelines into polished, product-ready APIs and lightweight microservices. Working alongside data engineers and product managers, you will deliver features that power capabilities like Dynamic Cohorts, Signals, and our GPT-powered release notes assistant.

What You'll Build & Run (approximate focus: 60% API / 40% data)
- Product-facing APIs: Design REST/GraphQL endpoints for cohort, feature-flag, and release-notes data. Build microservices in Java/Kotlin (Spring Boot or Vert.x) or Python (FastAPI) with production-grade SLAs. (A minimal FastAPI sketch follows this listing.)
- Schema & contract management: Manage JSON/Avro/Protobuf schemas, generate client SDKs, and enforce compatibility through CI/CD pipelines.
- Data-ops integration: Interface with Delta Lake tables in Databricks using Spark/JDBC. Transform datasets with PySpark or Spark SQL and surface them via APIs.
- Pipeline stewardship: Extend Airflow 2.x DAGs (Python), orchestrate upstream Spark jobs, and manage downstream triggers. Develop custom operators as needed.
- DevOps & quality: Manage GitHub Actions, Docker containers, Kubernetes manifests, and Datadog dashboards to ensure service reliability.
- LLM & AI features: Enable prompt engineering and embeddings exposure via APIs; experiment with tools like OpenAI, LangChain, or LangChain4j to support product innovation.

About You
You're a language-flexible engineer with a solid grasp of system design and the discipline to ship robust, well-documented, and observable software. You're curious, driven, and passionate about building infrastructure that scales with evolving product needs.

Mandatory Skills
- 4 to 6 years of professional experience in Java (11+) and Spring Boot
- Solid command of API design principles (REST, OpenAPI, GraphQL)
- Proficiency in SQL databases
- Experience with Docker, Git, and JUnit
- Hands-on knowledge of low-level design (LLD) and system design fundamentals

Highly Preferred / Optional Skills
- Working experience with Apache Airflow
- Familiarity with cloud deployment (e.g., Azure AKS, GCP, AWS)
- Exposure to Kubernetes and microservice orchestration
- Frontend/UI experience in any modern framework (e.g., React, Angular)
- Experience with Python (FastAPI, Flask)

Good-to-Have Skills
- CI/CD pipeline development using GitHub Actions
- Familiarity with code reviews, HLD, and architectural discussions
- Experience integrating with LLM APIs like OpenAI and building prompt-based systems
- Exposure to schema validation tools such as Pydantic, Jackson, and Protobuf
- Monitoring and alerting with Datadog, Prometheus, or equivalent

What's in it for you
Joining our team means being part of a culture that values diversity, teamwork, and high-quality work. We offer competitive salaries, annual bonus potential, generous paid time off, paid volunteering days, wellness benefits, and robust opportunities for professional growth and career advancement. Want to learn more about what you can look forward to during your career with us? Visit our careers site and our openings: zafin.com/careers

Zafin welcomes and encourages applications from people with disabilities. Accommodations are available on request for candidates taking part in all aspects of the selection process. Zafin is committed to protecting the privacy and security of the personal information collected from all applicants throughout the recruitment process. The methods by which Zafin collects, uses, stores, handles, retains, or discloses applicant information can be accessed by reviewing Zafin's privacy policy at https://zafin.com/privacy-notice/. By submitting a job application, you confirm that you agree to the processing of your personal data by Zafin as described in the candidate privacy notice.
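Here is a minimal sketch of the kind of product-facing API this role describes, using FastAPI (one of the stacks the posting names). The /cohorts endpoint, model fields, and in-memory data source are illustrative assumptions standing in for a real warehouse query.

```python
# Minimal FastAPI microservice exposing cohort data via a typed REST endpoint.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="cohort-service")


class Cohort(BaseModel):
    cohort_id: str
    name: str
    member_count: int


# Hypothetical in-memory stand-in for a Delta Lake / warehouse lookup.
_COHORTS = {"c1": Cohort(cohort_id="c1", name="new-to-bank", member_count=1200)}


@app.get("/cohorts/{cohort_id}", response_model=Cohort)
def get_cohort(cohort_id: str) -> Cohort:
    cohort = _COHORTS.get(cohort_id)
    if cohort is None:
        raise HTTPException(status_code=404, detail="cohort not found")
    return cohort
```

Run locally with `uvicorn main:app`; the `response_model` declaration doubles as the OpenAPI contract the posting's schema-management work would version and enforce.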
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Purpose
As a key member of the support team, the Application Support Engineer is responsible for ensuring the stability and availability of critical applications. This role involves monitoring, troubleshooting, and resolving application issues while adhering to defined SLAs and processes.

Desired Skills And Experience
- Experience in an application support or technical support role, with strong troubleshooting, problem-solving, and analytical skills.
- Ability to work independently and effectively, and to thrive in a fast-paced, high-pressure environment.
- Experience in either C# or Java preferred, to support effective troubleshooting and understanding of application code.
- Knowledge of various operating systems (Windows, Linux, macOS) and familiarity with software applications and tools used in the industry.
- Proficiency in programming languages such as Python, and scripting languages like Bash or PowerShell.
- Experience with database systems such as MySQL, Oracle, and SQL Server, and the ability to write and optimize SQL queries.
- Understanding of network protocols and configurations, and troubleshooting network-related issues.
- Skills in managing and configuring servers, including web servers (Apache, Nginx) and application servers (desirable).
- Familiarity with ITIL incident management processes.
- Familiarity with monitoring and logging tools like Nagios, Splunk, or the ELK stack to track application performance and issues.
- Knowledge of version control systems like Git to manage code changes and collaborate with development teams (desirable).
- Experience with cloud platforms such as AWS, Azure, or Google Cloud for deploying and managing applications (desirable).
- Experience in fixed income markets or financial applications support is preferred.
- Strong attention to detail and the ability to follow processes.
- Ability to adapt to changing priorities and client needs, with good verbal and written communication skills.

Key Responsibilities
- Provide L1/L2 technical support for applications.
- Monitor application performance and system health, proactively identifying potential issues.
- Investigate, diagnose, and resolve application incidents and service requests within agreed SLAs.
- Escalate complex or unresolved issues to the Service Manager or relevant senior teams.
- Document all support activities, including incident details, troubleshooting steps, and resolutions.
- Participate in shift handovers and knowledge sharing.
- Perform routine maintenance tasks to ensure optimal application performance.
- Collaborate with other support teams to ensure seamless issue resolution.
- Develop and maintain technical documentation and knowledge base articles.
- Assist in the implementation of new applications and updates.
- Provide training and support to junior team members.
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Purpose
As a key member of the support team, the Application Support Engineer is responsible for ensuring the stability and availability of critical applications. This role involves monitoring, troubleshooting, and resolving application issues while adhering to defined SLAs and processes.

Desired Skills And Experience
- Experience in an application support or technical support role, with strong troubleshooting, problem-solving, and analytical skills.
- Ability to work independently and effectively, and to thrive in a fast-paced, high-pressure environment.
- Experience in either C# or Java preferred, to support effective troubleshooting and understanding of application code.
- Knowledge of various operating systems (Windows, Linux, macOS) and familiarity with software applications and tools used in the industry.
- Proficiency in programming languages such as Python, and scripting languages like Bash or PowerShell.
- Experience with database systems such as MySQL, Oracle, and SQL Server, and the ability to write and optimize SQL queries.
- Understanding of network protocols and configurations, and troubleshooting network-related issues.
- Skills in managing and configuring servers, including web servers (Apache, Nginx) and application servers (desirable).
- Familiarity with ITIL incident management processes.
- Familiarity with monitoring and logging tools like Nagios, Splunk, or the ELK stack to track application performance and issues.
- Knowledge of version control systems like Git to manage code changes and collaborate with development teams (desirable).
- Experience with cloud platforms such as AWS, Azure, or Google Cloud for deploying and managing applications (desirable).
- Experience in fixed income markets or financial applications support is preferred.
- Strong attention to detail and the ability to follow processes.
- Ability to adapt to changing priorities and client needs, with good verbal and written communication skills.

Key Responsibilities
- Provide L1/L2 technical support for applications.
- Monitor application performance and system health, proactively identifying potential issues.
- Investigate, diagnose, and resolve application incidents and service requests within agreed SLAs.
- Escalate complex or unresolved issues to the Service Manager or relevant senior teams.
- Document all support activities, including incident details, troubleshooting steps, and resolutions.
- Participate in shift handovers and knowledge sharing.
- Perform routine maintenance tasks to ensure optimal application performance.
- Collaborate with other support teams to ensure seamless issue resolution.
- Develop and maintain technical documentation and knowledge base articles.
- Assist in the implementation of new applications and updates.
- Provide training and support to junior team members.
Posted 3 days ago
5.0 - 10.0 years
0 Lacs
Cochin
On-site
Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries, including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.

Data Engineer
Locations: Kochi/Chennai/Coimbatore/Mumbai/Pune/Hyderabad

Job Overview:
We are seeking a highly skilled and experienced Senior Data Engineer to join our growing data team. The ideal candidate will have deep expertise in Azure Databricks and Python, and experience building scalable data pipelines. Familiarity with Data Fabric architectures is a plus. You'll work closely with data scientists, analysts, and business stakeholders to deliver robust data solutions that drive insights and innovation.

Key Responsibilities:
- Design, build, and maintain large-scale, distributed data pipelines using Azure Databricks and PySpark (a minimal Delta Lake sketch follows this listing).
- Design, build, and maintain large-scale, distributed data pipelines using Azure Data Factory.
- Develop and optimize data workflows and ETL processes in Azure cloud environments.
- Write clean, maintainable, and efficient code in Python for data engineering tasks.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Monitor and troubleshoot data pipelines for performance and reliability issues.
- Implement data quality checks and validations, and ensure data lineage and governance.
- Contribute to the design and implementation of a Data Fabric architecture (desirable).

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5-10 years of experience in data engineering or related roles.
- Expertise in Azure Databricks, Delta Lake, and Spark.
- Strong proficiency in Python, especially in a data processing context.
- Experience with Azure Data Lake, Azure Data Factory, and related Azure services.
- Hands-on experience in building data ingestion and transformation pipelines.
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).

Good to Have:
- Experience with or an understanding of Data Fabric concepts (e.g., data virtualization, unified data access, metadata-driven architectures).
- Knowledge of modern data warehousing and lakehouse principles.
- Exposure to tools like Apache Airflow, dbt, or similar.
- Experience working in agile/Scrum environments.
- DP-500 and DP-600 certifications.

What We Offer:
- Competitive salary and performance-based bonuses.
- Flexible work arrangements.
- Opportunities for continuous learning and career growth.
- A collaborative, inclusive, and innovative work culture.

www.orioninc.com

Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.

Candidate Privacy Policy
Orion Systems Integrators, LLC and its subsidiaries and affiliates (collectively, "Orion," "we," or "us") are committed to protecting your privacy. This Candidate Privacy Policy (orioninc.com) ("Notice") explains: what information we collect during our application and recruitment process and why we collect it; how we handle that information; and how to access and update that information. Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.
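As a sketch of the Databricks/Delta Lake work this listing centers on, here is a minimal incremental upsert into a Delta table. The storage path, table name, and join key are assumptions; on Databricks the SparkSession and Delta support are provided by the runtime.

```python
# Incremental upsert (MERGE) of new records into a curated Delta table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # supplied by the Databricks runtime

# Hypothetical landing zone in Azure Data Lake Storage.
updates = spark.read.parquet(
    "abfss://raw@examplelake.dfs.core.windows.net/customers/"
)

target = DeltaTable.forName(spark, "curated.customers")  # assumed existing table
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # refresh changed rows
    .whenNotMatchedInsertAll()   # add brand-new customers
    .execute()
)
```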
Posted 3 days ago
3.0 years
0 Lacs
India
Remote
About CuringBusy: CuringBusy is a Fully Remote company , providing subscription-based, remote Executive Assistant services to busy Entrepreneurs, Business owners, and Professionals across the globe. We help entrepreneurs free up their time by outsourcing their everyday, routine admin work like calendar management, email, customer service, and marketing tasks like social media, digital marketing, website management, etc. Job Role : The Digital Marketing Specialist is responsible for developing, implementing, and managing website and marketing strategies that promote products and services across multiple digital channels. This includes creating campaigns and driving digital marketing initiatives on search engine marketing, email marketing, display advertising, website creation & optimization, paid social media, email, and mobile marketing. This role will develop the digital marketing plan and coordinate with the sales, product, content, and other teams to ensure the successful execution of the campaigns . Responsibilities: ● Develop effective digital marketing plans to drive our products/services awareness that align with the company's business needs. ● Website development on WordPress. ● Manage the Search Engine Marketing (SEM), Display Advertising, Website Optimization & Conversion Rate Optimization efforts. ● Lead paid social media strategies & campaigns (LinkedIn, Facebook & Instagram) and identify opportunities to leverage emerging platforms. ● Manage email campaigns including segmentation strategies & automation pieces. ● Provide reporting on the various online performance KPIs such as CTRs, CPMs & CPCs. ● Design, build, and maintain our social media presence. ● Design, and manage Social media and digital marketing Advertising campaigns and implement social media strategy to align with business goals. ● Measures and reports the performance of all digital marketing campaigns and assesses against goals (ROI and KPIs). ● Utilizes strong analytical ability to evaluate end-to-end customer experience across multiple channels and customer touchpoints. Job Qualifications and Skill Sets: ● Bachelor’s or master’s degree in Digital Marketing. ● Demonstrable 3+ years of experience leading and managing SEO/SEM, marketing database, email, social media, and display advertising campaigns. ● Highly creative with experience in identifying target audiences and devising digital campaigns that engage, inform, and motivate ● Experience in optimizing landing pages and user funnels. ● Proficiency in graphic design software including Adobe Photoshop, Adobe Illustrator, and other visual design tools. ● Knowledge of both front-end and back-end languages. ● Familiarity with databases (e.g. MySQL, MongoDB), web servers (e.g. Apache), and UI/UX design ● Solid knowledge of website and marketing analytics tools (e.g., Google Analytics, NetInsight, Omniture, WebTrends, SEMRush, etc.) ● Experienced in any of the Website Platforms: WordPress, Wix, Shopify, WooCommerce, PrestaShop, and Squarespace. ● Experience with advertisement tools (e.g., Google Ads, Facebook Ads, Bing Ads, Instagram Ads, YouTube ads, etc.) ● Knowledge of Software like Mailerlite, Mailchimp, Sendinblue, Sender, Hubspot email marketing, Omnisend, Sendpulse, Mailjet, Moosend, etc. ● Proficient in marketing research and statistical analysis. Your Benefits ● Work from Home Job/Completely Remote. ● Opportunity to grow with a Fast-Growing Startup. ● Exposure to International Clients. 
Work Timings: Evening Shift or Night Shift, 3 pm-12 am / 6 pm-3 am (Monday-Friday)
Salary: Based on company standards and skill sets.
Job Type: Full-time
Pay: As per industry standards
Posted 3 days ago
0 years
0 Lacs
Greater Bengaluru Area
On-site
We are looking for a skilled ETL pipeline support engineer to join our DevOps team. In this role, you will ensure the smooth operation of production ETL pipelines and will be responsible for monitoring and troubleshooting existing pipelines. The role requires a strong understanding of SQL and Spark, along with experience in AWS Glue and Redshift.
Required Skills and Experience:
Bachelor's degree in Computer Science, Engineering, or a related field.
Proven experience supporting and maintaining ETL pipelines.
Strong proficiency in SQL and experience with relational databases (e.g., Redshift).
Solid understanding of distributed computing concepts and experience with Apache Spark.
Hands-on experience with AWS Glue and other AWS data services (e.g., S3, Lambda).
Experience with data warehousing concepts and best practices.
Excellent problem-solving and analytical skills, and strong communication and collaboration skills.
Ability to work independently and as part of a team.
Preferred Skills and Experience:
Experience with other ETL tools and technologies.
Experience with scripting languages (e.g., Python).
Familiarity with Agile development methodologies.
Experience with data visualization tools (e.g., Tableau, Power BI).
Roles & Responsibilities
Monitor and maintain existing ETL pipelines, ensuring data quality and availability.
Identify and resolve pipeline issues and data errors.
Troubleshoot data integration processes.
Collaborate with data engineers and other stakeholders to resolve complex issues when needed.
Develop and maintain the necessary documentation for ETL processes and pipelines.
Participate in the on-call rotation for production support.
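As a hedged illustration of the data-quality monitoring this role performs, here is a minimal PySpark check that could run inside an AWS Glue job or a standalone Spark submit; the S3 path and the orders/order_id/amount schema are assumptions for the example, not details from the posting.

```python
# Minimal PySpark data-quality check for an ETL pipeline's output.
# The path and schema (order_id, amount) are illustrative assumptions.
# Requires: pip install pyspark
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-dq-check").getOrCreate()

# Hypothetical curated output of the pipeline under support.
df = spark.read.parquet("s3://example-bucket/curated/orders/")

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
dup_keys = total - df.dropDuplicates(["order_id"]).count()
bad_amounts = df.filter(F.col("amount") < 0).count()

print(f"rows={total} null_keys={null_keys} dup_keys={dup_keys} bad_amounts={bad_amounts}")

# Fail loudly so the scheduler (Glue trigger, Airflow, cron) marks the run red
# and the on-call engineer is alerted before bad data reaches Redshift.
if null_keys or dup_keys or bad_amounts:
    raise ValueError("Data-quality check failed; see counts above.")
```

A check like this is typically wired into the pipeline's final step, so a failure blocks downstream loads rather than silently propagating bad rows.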
Posted 3 days ago
1.0 years
11 - 13 Lacs
Hyderābād
Remote
Experience: 1+ years
Work location: Bangalore, Chennai, Hyderabad, Pune (hybrid)
Job Description: GCP Cloud Engineer
Shift time: 2 to 11 PM IST
Budget: max 13 LPA
Primary Skills & Weightage: GCP - 50%, Kubernetes - 25%, Node.js - 25%
Technical Skills
Cloud: Experience working with Google Cloud Platform (GCP) services.
Containers & Orchestration: Practical experience deploying and managing applications on Kubernetes.
Programming: Proficiency in Node.js development, including building and maintaining RESTful APIs or backend services.
Messaging: Familiarity with Apache Kafka for producing and consuming messages.
Databases: Experience with PostgreSQL or similar relational databases (writing queries, basic schema design).
Version Control: Proficient with Git and GitHub workflows (branching, pull requests, code reviews).
Development Tools: Comfortable using Visual Studio Code (VS Code) or a similar IDE.
Additional Requirements
Communication: Ability to communicate clearly in English (written and verbal).
Collaboration: Experience working in distributed or remote teams.
Problem Solving: Demonstrated ability to troubleshoot and debug issues independently.
Learning: Willingness to learn new technologies and adapt to changing requirements.
Preferred but not required:
Experience with CI/CD pipelines.
Familiarity with Agile methodologies.
Exposure to monitoring/logging tools (e.g., Prometheus, Grafana, ELK stack).
Job Type: Full-time
Pay: ₹1,100,000.00 - ₹1,300,000.00 per year
Schedule: UK shift
Work Location: In person
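For the Kafka requirement above, here is a minimal, hedged consume loop. The posting calls for Node.js, but the sketch is in Python for consistency with the other examples in this document, and the broker address, group id, and topic name are all assumptions.

```python
# Minimal Kafka consume loop; broker, group id, and topic are assumptions.
# Requires: pip install confluent-kafka
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed broker address
    "group.id": "example-service",          # assumed consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])              # assumed topic name

try:
    while True:
        msg = consumer.poll(1.0)            # block up to 1 s for the next message
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(f"{msg.topic()}[{msg.partition()}]@{msg.offset()}: "
              f"{msg.value().decode('utf-8')}")
finally:
    consumer.close()                        # commit offsets and leave the group
```

The same loop structure carries over directly to the Node.js clients the role actually uses (e.g., a poll/consume loop with per-message error handling and a clean shutdown).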
Posted 3 days ago
3.0 - 7.0 years
7 - 16 Lacs
Hyderābād
On-site
AI Specialist / Machine Learning Engineer
Location: On-site (Hyderabad)
Department: Data Science & AI Innovation
Experience Level: Mid-Senior
Reports To: Director of AI / CTO
Employment Type: Full-time
Job Summary
We are seeking a skilled and forward-thinking AI Specialist to join our advanced technology team. In this role, you will lead the design, development, and deployment of cutting-edge AI/ML solutions, including large language models (LLMs), multimodal systems, and generative AI. You will collaborate with cross-functional teams to develop intelligent systems, automate complex workflows, and unlock insights from data at scale.
Key Responsibilities
Design and implement machine learning models for natural language processing (NLP), computer vision, predictive analytics, and generative AI.
Fine-tune and deploy LLMs using frameworks such as Hugging Face Transformers, OpenAI APIs, and Anthropic Claude.
Develop Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, and vector databases (e.g., Pinecone, Weaviate, Qdrant).
Productionize ML workflows using MLflow, TensorFlow Extended (TFX), or AWS SageMaker Pipelines.
Integrate generative AI with business applications, including Copilot-style features, chat interfaces, and workflow automation.
Collaborate with data scientists, software engineers, and product managers to build and scale AI-powered products.
Monitor, evaluate, and optimize model performance, focusing on fairness, explainability (e.g., SHAP, LIME), and data/model drift.
Stay informed on cutting-edge AI research (e.g., NeurIPS, ICLR, arXiv) and evaluate its applicability to business challenges.
Tools & Technologies
Languages & Frameworks: Python, PyTorch, TensorFlow, JAX, FastAPI, LangChain, LlamaIndex
ML & AI Platforms: OpenAI (GPT-4/4o), Anthropic Claude, Mistral, Cohere, Hugging Face Hub & Transformers, Google Vertex AI, AWS SageMaker, Azure ML
Data & Deployment: MLflow, DVC, Apache Airflow, Ray, Docker, Kubernetes, RESTful APIs, GraphQL, Snowflake, BigQuery, Delta Lake
Vector Databases & RAG Tools: Pinecone, Weaviate, Qdrant, FAISS, ChromaDB, Milvus
Generative & Multimodal AI: DALL·E, Sora, Midjourney, Runway, Whisper, CLIP, SAM (Segment Anything Model)
Qualifications
Bachelor's or Master's in Computer Science, AI, Data Science, or a related discipline.
3-7 years of experience in machine learning or applied AI.
Hands-on experience deploying ML models to production environments.
Familiarity with LLM prompt engineering and fine-tuning.
Strong analytical thinking, problem-solving ability, and communication skills.
Preferred Qualifications
Contributions to open-source AI projects or academic publications.
Experience with multi-agent frameworks (e.g., AutoGPT, OpenDevin).
Knowledge of synthetic data generation and augmentation techniques.
Job Type: Permanent
Pay: ₹734,802.74 - ₹1,663,085.14 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Work Location: In person
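Since the posting centers on RAG pipelines, here is a minimal, hedged sketch of the pattern using OpenAI embeddings with a local FAISS index. The model names, toy corpus, and query are assumptions; a production pipeline would typically swap FAISS for a managed vector database such as Pinecone or Weaviate and add chunking, caching, and evaluation.

```python
# Minimal RAG sketch: embed a corpus, retrieve by similarity, answer grounded
# on the retrieved context. Models and corpus are illustrative assumptions.
# Requires: pip install openai faiss-cpu numpy (and OPENAI_API_KEY set)
import numpy as np
import faiss
from openai import OpenAI

client = OpenAI()

docs = [
    "Our refund window is 30 days from delivery.",
    "Support is available 24/7 via chat and email.",
    "Enterprise plans include a dedicated account manager.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

# Index the corpus once.
vectors = embed(docs)
index = faiss.IndexFlatL2(vectors.shape[1])
index.add(vectors)

# Retrieve the two most relevant chunks for the query.
query = "How long do customers have to request a refund?"
_, ids = index.search(embed([query]), 2)
context = "\n".join(docs[i] for i in ids[0])

# Generate an answer grounded on the retrieved context only.
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": f"Answer using only this context:\n{context}"},
        {"role": "user", "content": query},
    ],
)
print(answer.choices[0].message.content)
```

The retrieve-then-ground structure is the core of every RAG stack the posting names; LangChain and LlamaIndex wrap these same steps behind higher-level abstractions.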
Posted 3 days ago
0 years
0 Lacs
India
On-site
Job Title: PHP Intern (Full Stack Preferred)
Location: Laxmi Nagar
Employment Type: Internship / Entry-Level
Experience: Freshers / Interns
Job Description
We are looking for a highly motivated PHP Intern with strong foundational knowledge of website design and development, a creative mindset, and a willingness to learn and grow in a fast-paced environment. As part of our cross-functional development team, you will assist in building scalable software solutions and contribute across all stages of the software development life cycle, from ideation to deployment. Full-stack developers will be given preference; freshers and interns with a strong learning attitude and technical base are encouraged to apply.
Key Responsibilities
Assist in the creation and implementation of various web-based applications and platforms.
Work on development tasks using Core PHP, the LAMP stack, WordPress, Magento, and other CMSs.
Support integration of third-party APIs and external systems.
Help design intuitive, user-friendly front-end experiences using HTML5, CSS3, JavaScript, jQuery, and AJAX.
Work alongside senior developers on Shopify, React, Flutter, and other current tech stacks.
Collaborate on database design and management using MySQL or NoSQL databases.
Participate in DevOps processes and deployment via Nginx, Apache, and AWS.
Use version control and collaborate through GitHub.
Stay current with new technologies and industry trends to improve performance and usability.
Preferred Skills & Qualifications
Basic experience or academic knowledge of PHP and full-stack development.
Familiarity with CMS platforms like WordPress, Magento, and Shopify.
Understanding of front-end frameworks and responsive design principles.
Exposure to cloud services like AWS is a plus.
Good analytical, debugging, and problem-solving skills.
Job Type: Full-time
Pay: ₹5,000.00 per month
Schedule: Day shift
Work Location: In person
Application Deadline: 22/06/2025
Expected Start Date: 22/06/2025
Posted 3 days ago
5.0 years
0 Lacs
Gurgaon
Remote
About Us: At apexanalytix, we're lifelong innovators! Since our founding nearly four decades ago, we've been consistently growing, profitable, and delivering the best procure-to-pay solutions to the world. We're the perfect balance of established company and start-up, and you will find a unique home here. You'll recognize the names of our clients; most of them are on The Global 2000. They trust us to give them the latest in controls, audit, and analytics software every day. Industry analysts consistently rank us as a top supplier management solution, and you'll be helping build that reputation. Read more about apexanalytix: https://www.apexanalytix.com/about/
Job Details
The Role, Quick Take - We are looking for a highly skilled systems engineer with experience in virtualization, Linux, Kubernetes, and server infrastructure. The engineer will be responsible for designing, deploying, and maintaining enterprise-grade cloud infrastructure built on Apache CloudStack (or similar technology) and Kubernetes on Linux.
The Work -
Hypervisor Administration & Engineering
• Architect, deploy, and manage Apache CloudStack for private and hybrid cloud environments.
• Manage and optimize KVM or similar virtualization technology.
• Implement high-availability cloud services using redundant networking, storage, and compute.
• Automate infrastructure provisioning using OpenTofu, Ansible, and API scripting (a sketch of direct CloudStack API scripting follows this posting).
• Troubleshoot and optimize hypervisor networking (virtual routers, isolated networks), storage, and API integrations.
• Working experience with shared storage technologies like GFS and NFS.
Kubernetes & Container Orchestration
• Deploy and manage Kubernetes clusters in on-premises and hybrid environments.
• Integrate Cluster API (CAPI) for automated K8s provisioning.
• Manage Helm, Azure DevOps, and ingress (Nginx/Citrix) for application deployment.
• Implement container security best practices, policy-based access control, and resource optimization.
Linux Administration
• Configure and maintain RedHat HA clustering (Pacemaker, Corosync) for mission-critical applications.
• Manage GFS2 shared storage, cluster fencing, and high-availability networking.
• Ensure seamless failover and data consistency across cluster nodes.
• Perform Linux OS hardening, security patching, performance tuning, and troubleshooting.
Physical Server Maintenance & Hardware Management
• Perform physical server installation, diagnostics, firmware upgrades, and maintenance.
• Work with SAN/NAS storage, network switches, and power management in data centers.
• Implement out-of-band management (IPMI/iLO/DRAC) for remote server monitoring and recovery.
• Ensure hardware resilience, failure prediction, and proper capacity planning.
Automation, Monitoring & Performance Optimization
• Automate infrastructure provisioning, monitoring, and self-healing capabilities.
• Implement Prometheus, Grafana, and custom scripting via API for proactive monitoring.
• Optimize compute, storage, and network performance in large-scale environments.
• Implement disaster recovery (DR) and backup solutions for cloud workloads.
Collaboration & Documentation
• Work closely with DevOps, Enterprise Support, and software developers to streamline cloud workflows.
• Maintain detailed infrastructure documentation, playbooks, and incident reports.
• Train and mentor junior engineers on CloudStack, Kubernetes, and HA clustering.
The Must-Haves -
• 5+ years of experience in CloudStack or a similar virtualization platform, Kubernetes, and Linux system administration.
• Strong expertise in Apache CloudStack (4.19+) or a similar virtualization platform, the KVM hypervisor, and Cluster API (CAPI).
• Extensive experience in RedHat HA clustering (Pacemaker, Corosync) and GFS2 shared storage.
• Proficiency in OpenTofu, Ansible, Bash, Python, and Go for infrastructure automation.
• Experience with networking (VXLAN, SDN, BGP) and security best practices.
• Hands-on expertise in physical server maintenance, IPMI/iLO, RAID, and SAN storage.
• Strong troubleshooting skills in Linux performance tuning, logs, and kernel debugging.
• Knowledge of monitoring tools (Prometheus, Grafana, Alertmanager).
Preferred Qualifications
• Experience with multi-cloud (AWS, Azure, GCP) or hybrid cloud environments.
• Familiarity with CloudStack API customization and plugin development.
• Strong background in disaster recovery (DR) and backup solutions for cloud environments.
• Understanding of service meshes, ingress, and SSO.
• Experience in Cisco UCS platform management.
Over the years, we've discovered that the most effective and successful associates at apexanalytix are people who have a specific combination of values, skills, and behaviors that we call "The apex Way". Read more about The apex Way: https://www.apexanalytix.com/careers/
Benefits
At apexanalytix we know that our associates are the reason behind our successes. We truly value you as an associate and part of our professional family, and our goal is to offer the very best benefits possible to you and your loved ones. When it comes to benefits, whether for yourself or your family, the most important aspect is choice, and we get that. apexanalytix offers competitive benefits for the countries that we serve, in addition to our BeWell@apex initiative that encourages employees' growth in six key wellness areas: Emotional, Physical, Community, Financial, Social, and Intelligence. With resources such as a strong Mentor Program, an Internal Training Portal, plus Education, Tuition, and Certification Assistance, we provide the tools for our associates to grow and develop.
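As context for the API-scripting responsibilities above, here is a minimal, hedged sketch of calling the CloudStack HTTP API directly from Python. The endpoint URL and keys are placeholders; CloudStack authenticates requests by signing the sorted, lower-cased query string with HMAC-SHA1.

```python
# Minimal signed CloudStack API call. ENDPOINT, API_KEY, and SECRET_KEY are
# placeholder assumptions; listVirtualMachines is a standard API command.
# Requires: pip install requests
import base64
import hashlib
import hmac
import urllib.parse

import requests

ENDPOINT = "https://cloudstack.example.com/client/api"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"
SECRET_KEY = "YOUR_SECRET_KEY"


def signed_request(command, **params):
    """Issue a signed GET request against the CloudStack API."""
    params.update({"command": command, "response": "json", "apikey": API_KEY})
    # Sort parameters, URL-encode the values, then sign the lower-cased string.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}" for k, v in sorted(params.items())
    )
    digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    return requests.get(f"{ENDPOINT}?{query}&signature={signature}").json()


# Example: list running virtual machines, a common first health check.
print(signed_request("listVirtualMachines", state="Running"))
```

In day-to-day work, client libraries or OpenTofu/Ansible providers handle this signing; scripting it directly is mainly useful for monitoring probes and debugging API integrations.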
Posted 3 days ago
1.0 years
11 - 13 Lacs
Pune
Remote
Experience: 1+ years
Work location: Bangalore, Chennai, Hyderabad, Pune (hybrid)
Job Description: GCP Cloud Engineer
Shift time: 2 to 11 PM IST
Budget: max 13 LPA
Primary Skills & Weightage: GCP - 50%, Kubernetes - 25%, Node.js - 25%
Technical Skills
Cloud: Experience working with Google Cloud Platform (GCP) services.
Containers & Orchestration: Practical experience deploying and managing applications on Kubernetes.
Programming: Proficiency in Node.js development, including building and maintaining RESTful APIs or backend services.
Messaging: Familiarity with Apache Kafka for producing and consuming messages.
Databases: Experience with PostgreSQL or similar relational databases (writing queries, basic schema design).
Version Control: Proficient with Git and GitHub workflows (branching, pull requests, code reviews).
Development Tools: Comfortable using Visual Studio Code (VS Code) or a similar IDE.
Additional Requirements
Communication: Ability to communicate clearly in English (written and verbal).
Collaboration: Experience working in distributed or remote teams.
Problem Solving: Demonstrated ability to troubleshoot and debug issues independently.
Learning: Willingness to learn new technologies and adapt to changing requirements.
Preferred but not required:
Experience with CI/CD pipelines.
Familiarity with Agile methodologies.
Exposure to monitoring/logging tools (e.g., Prometheus, Grafana, ELK stack).
Job Type: Full-time
Pay: ₹1,100,000.00 - ₹1,300,000.00 per year
Schedule: UK shift
Work Location: In person
Posted 3 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Manager, Quality Engineer
The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.
Our Technology Centres focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centres are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to our other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Centre helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the Tech Centres.
What Will You Do In This Role
Develop and Implement Advanced Automated Testing Frameworks: Architect, design, and maintain sophisticated automated testing frameworks for data pipelines and ETL processes, ensuring robust data quality and reliability (a minimal sketch of such a test suite follows below).
Conduct Comprehensive Quality Assurance Testing: Lead the execution of extensive testing strategies, including functional, regression, performance, and security testing, to validate data accuracy and integrity across the bronze layer.
Monitor and Enhance Data Reliability: Collaborate with the data engineering team to establish and refine monitoring and alerting systems that proactively identify data quality issues and system failures, implementing corrective actions as needed.
Leverage Generative AI: Innovate and apply generative AI techniques to enhance testing processes, automate complex data validation scenarios, and improve overall data quality assurance workflows.
Collaborate with Cross-Functional Teams: Serve as a key liaison between Data Engineers, Product Analysts, and other stakeholders to deeply understand data requirements and ensure that testing aligns with strategic business objectives.
Document and Standardize Testing Processes: Create and maintain comprehensive documentation of testing procedures, results, and best practices, facilitating knowledge sharing and continuous improvement across the organization.
Drive Continuous Improvement Initiatives: Lead efforts to develop and implement best practices for QA automation and reliability, including conducting code reviews, mentoring junior team members, and optimizing testing processes.
What You Should Have
Educational Background: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
Experience: 4+ years of experience in QA automation, with a strong focus on data quality and reliability testing in complex data engineering environments.
Technical Skills: Advanced proficiency in programming languages such as Python, Java, or similar for writing and optimizing automated tests. Extensive experience with testing frameworks and tools (e.g., Selenium, JUnit, pytest) and data validation tools, with a focus on scalability and performance. Deep familiarity with data processing frameworks (e.g., Apache Spark) and data storage solutions (e.g., SQL, NoSQL), including performance tuning and optimization. Strong understanding of generative AI concepts and tools, and their application in enhancing data quality and testing methodologies. Proficiency in using Jira Xray for advanced test management, including creating, executing, and tracking complex test cases and defects.
Analytical Skills: Exceptional analytical and problem-solving skills, with a proven ability to identify, troubleshoot, and resolve intricate data quality issues effectively.
Communication Skills: Outstanding verbal and written communication skills, with the ability to articulate complex technical concepts to both technical and non-technical stakeholders.
Preferred Qualifications
Experience with Cloud Platforms: Extensive familiarity with cloud data services (e.g., AWS, Azure, Google Cloud) and their QA tools, including experience in cloud-based testing environments.
Knowledge of Data Governance: In-depth understanding of data governance principles and practices, including data lineage, metadata management, and compliance requirements.
Experience with CI/CD Pipelines: Strong knowledge of continuous integration and continuous deployment (CI/CD) practices and tools (e.g., Jenkins, GitLab CI), with experience automating testing within CI/CD workflows.
Certifications: Relevant certifications in QA automation or data engineering (e.g., ISTQB, AWS Certified Data Analytics) are highly regarded.
Agile Methodologies: Proven experience working in Agile/Scrum environments, with a strong understanding of Agile testing practices and principles.
Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.
Who We Are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada, and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.
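To make the automated data-quality testing described above concrete, here is a minimal, hedged pytest sketch. The table and column names (event_id, event_date) are illustrative assumptions, and a real suite would read the bronze-layer table under test instead of a toy DataFrame.

```python
# Minimal pytest data-quality tests for a bronze-layer table.
# The schema (event_id, event_date) is an illustrative assumption.
# Requires: pip install pytest pyspark
import pytest
from pyspark.sql import SparkSession


@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.appName("bronze-dq-tests").getOrCreate()


@pytest.fixture(scope="session")
def bronze(spark):
    # Stand-in for spark.read.table("bronze.events") in a real suite.
    return spark.createDataFrame(
        [(1, "2024-01-01"), (2, "2024-01-02")], ["event_id", "event_date"]
    )


def test_primary_key_is_unique(bronze):
    assert bronze.count() == bronze.dropDuplicates(["event_id"]).count()


def test_no_null_event_dates(bronze):
    assert bronze.filter(bronze.event_date.isNull()).count() == 0
```

Run with `pytest -q`; in a CI/CD pipeline (Jenkins, GitLab CI), the same tests can gate the promotion of new pipeline code.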
What We Look For
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us and start making your impact today.
#HYDIT2025
Search Firm Representatives, Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs
Preferred Skills:
Job Posting End Date: 08/31/2025
A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.
Requisition ID: R345312
Posted 3 days ago
The Apache Software Foundation maintains a wide range of widely used open-source software. In India, the demand for professionals with expertise in Apache tools and technologies is on the rise, and job seekers pursuing Apache-related roles have opportunities across many industries. Let's look at the Apache job market in India to better understand the landscape.
India's major IT hubs, known for their thriving technology sectors, see a high demand for Apache professionals across organizations of all sizes.
The salary range for Apache professionals in India varies based on experience and skill level:
- Entry-level: INR 3-5 lakhs per annum
- Mid-level: INR 6-10 lakhs per annum
- Experienced: INR 12-20 lakhs per annum
In the Apache job market in India, a typical career path may progress as follows:
1. Junior Developer
2. Developer
3. Senior Developer
4. Tech Lead
5. Architect
Besides expertise in Apache tools and technologies, professionals in this field are often expected to have skills in:
- Linux
- Networking
- Database Management
- Cloud Computing
As you embark on your journey to explore Apache jobs in India, it is essential to stay updated on the latest trends and technologies in the field. By honing your skills and preparing thoroughly for interviews, you can position yourself as a competitive candidate in the Apache job market. Stay motivated, keep learning, and pursue your dream career with confidence!