
10828 Apache Jobs - Page 25

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-Have Skills: Apache Spark
Good-to-Have Skills: NA
Minimum Experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.

Professional & Technical Skills:
- Must-have: proficiency in Apache Spark (see the sketch below).
- Strong understanding of data pipeline architecture and design.
- Experience with ETL processes and data integration techniques.
- Familiarity with data warehousing concepts and technologies.
- Knowledge of data quality frameworks and best practices.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Apache Spark.
- This position is based in Chennai.
- 15 years of full-time education is required.
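Since the posting centers on Apache Spark ETL work, here is a minimal, illustrative PySpark sketch of that kind of pipeline; the bucket paths and column names are assumptions, not anything specified in the listing.

```python
# Minimal PySpark ETL sketch of the pipeline work this role describes.
# Source/target paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV landed by an upstream system
raw = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

# Transform: enforce basic data quality, then normalise types
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
)

# Load: write partitioned Parquet for downstream consumers
clean.write.mode("overwrite").partitionBy("order_date").parquet("s3://curated-bucket/orders/")
```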

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-Have Skills: Apache Spark
Good-to-Have Skills: NA
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.

Professional & Technical Skills:
- Must-have: proficiency in Apache Spark.
- Strong understanding of data pipeline architecture and design.
- Experience with ETL processes and data integration techniques.
- Familiarity with data warehousing concepts and technologies.
- Knowledge of data quality frameworks and best practices.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Apache Spark.
- This position is based in Chennai.
- 15 years of full-time education is required.

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

JR0124760 Junior Associate, Solution Engineering – Pune, India

Are you looking to build a career in the financial services sector? How about unleashing your skills in a hugely successful business that is committed to moving money for better? Join Western Union as a Junior Associate, Solution Engineering. Western Union powers your pursuit.

In this exciting role you will apply your skills in developing and enhancing software, resolving functionality issues, and implementing new technologies to create a platform that helps the world move money for better. As a Junior Associate, you will be responsible for implementing a service delivery model supporting assigned platforms through a predefined framework.

Role Responsibilities
You will develop and implement new software as well as maintain and improve existing software. You will ensure that software functionality is implemented with a focus on code optimization, and you will recommend improvements to existing software programs.

Role Requirements
- 2-4 years of experience in Java development (Java 11 and above).
- Excellent programming skills with a test-driven development approach.
- Familiarity with microservices-driven architecture.
- Hands-on experience with Spring Boot, microservices, Hibernate, and the JBoss server.
- Hands-on experience using SOAP and RESTful web services.
- Databases: RDBMS and NoSQL (preferably Couchbase).
- Git, Maven (POM), and Docker are must-haves.
- CI/CD pipeline (DevOps) experience is optional.
- Design patterns experience will be a plus.
- Cloud development experience with AWS will be a plus.
- Excellent at understanding requirements, with a consistent focus on delivering high quality.
- Camunda Workflow and Apache Camel Orchestrator are optional.

We make financial services accessible to humans everywhere. Join us for what's next. Western Union is positioned to become the world's most accessible financial services company, transforming lives and communities. We're a diverse and passionate customer-centric team of over 8,000 employees serving 200 countries and territories, reaching customers and receivers around the globe. More than moving money, we design easy-to-use products and services for our digital and physical financial ecosystem that help our customers move forward. Just as we help our global customers prosper, we support our employees in achieving their professional aspirations. You'll have plenty of opportunities to learn new skills and build a career, as well as receive a great compensation package. If you're ready to help drive the future of financial services, it's time for Western Union. Learn more about our purpose and people at https://careers.westernunion.com/.

Benefits
You will also have access to short-term incentives, multiple health insurance options, accident and life insurance, and best-in-class development platforms, to name a few (https://careers.westernunion.com/global-benefits/). Please see the location-specific benefits below; your recruiter may share additional role-specific benefits during the interview process or in an offer of employment.

Your India-Specific Benefits Include
- Employees' Provident Fund (EPF)
- Gratuity payment
- Public holidays
- Annual leave, sick leave, compensatory leave, and maternity/paternity leave
- Annual health check-up
- Hospitalization insurance coverage (Mediclaim)
- Group life insurance, group personal accident insurance coverage, and business travel insurance
- Cab facility
- Relocation benefit

Western Union values in-person collaboration, learning, and ideation whenever possible. We believe this creates value through common ways of working and supports the execution of enterprise objectives, which will ultimately help us achieve our strategic goals. By connecting face-to-face, we are better able to learn from our peers, problem-solve together, and innovate. Our Hybrid Work Model categorizes each role into one of three categories. Western Union has determined the category of this role to be Hybrid, defined as a flexible working arrangement that enables employees to divide their time between working from home and working from an office location, with the expectation of working from the office a minimum of three days a week.

We are passionate about diversity. Our commitment is to provide an inclusive culture that celebrates the unique backgrounds and perspectives of our global teams while reflecting the communities we serve. We do not discriminate based on race, color, national origin, religion, political affiliation, sex (including pregnancy), sexual orientation, gender identity, age, disability, marital status, or veteran status. The company will provide accommodation to applicants, including those with disabilities, during the recruitment process, following applicable laws.

Estimated Job Posting End Date: 08-01-2025. This application window is a good-faith estimate of the time that this posting will remain open. This posting will be promptly updated if the deadline is extended or the role is filled.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-Have Skills: AWS Architecture
Good-to-Have Skills: Python (programming language)
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide innovative solutions that enhance data accessibility and usability. This is an AWS Data Architect role, leading the design and implementation of scalable, cloud-native data platforms. The ideal candidate will have deep expertise in AWS data services, along with hands-on proficiency in Python and PySpark for building robust data pipelines and processing frameworks.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.
- Design and implement enterprise-scale data lake and data warehouse solutions on AWS.
- Lead the development of ELT/ETL pipelines using AWS Glue, EMR, Lambda, and Step Functions, with Python and PySpark (see the sketch below).
- Work closely with data engineers, analysts, and business stakeholders to define the data architecture strategy.
- Define and enforce data modeling, metadata, security, and governance best practices.
- Create reusable architectural patterns and frameworks to streamline future development.
- Provide architectural leadership for migrating legacy data systems to AWS.
- Optimize the performance, cost, and scalability of data processing workflows.

Professional & Technical Skills:
- Must-have: proficiency in AWS architecture.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with data warehousing concepts and technologies.
- Knowledge of programming languages such as Python or Java for data processing.
- AWS services: S3, Glue, Athena, Redshift, EMR, Lambda, IAM, Step Functions, CloudFormation or Terraform.
- Languages: Python, PySpark, SQL.
- Big data: Apache Spark, Hive, Delta Lake.
- Orchestration & DevOps: Airflow, Jenkins, Git, CI/CD pipelines.
- Security & governance: AWS Lake Formation, Glue Catalog, encryption, RBAC.
- Visualization: exposure to BI tools like QuickSight, Tableau, or Power BI is a plus.

Additional Information:
- The candidate should have a minimum of 5 years of experience in AWS architecture.
- This position is based at our Pune office.
- 15 years of full-time education is required.
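Since the role pairs AWS Glue with PySpark for ELT pipelines, here is a hedged sketch of the standard Glue job skeleton; the catalog database, table name, and output path are hypothetical placeholders.

```python
# Sketch of a standard AWS Glue (PySpark) job skeleton; the catalog
# database/table and output path are placeholder assumptions.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, transform with Spark, write back to S3
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="analytics_db", table_name="raw_events"  # hypothetical names
)
df = dyf.toDF().dropDuplicates(["event_id"])
df.write.mode("append").parquet("s3://curated-bucket/events/")

job.commit()
```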

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting and optimizing existing data workflows to enhance performance and reliability.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: proficiency in the Databricks Unified Data Analytics Platform (see the sketch below).
- Good to have: experience with Apache Spark and data warehousing solutions.
- Strong understanding of data modeling and database design principles.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Experience in programming languages such as Python or Scala for data processing.

Additional Information:
- The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
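As a rough illustration of day-to-day Databricks work, here is a sketch of an incremental MERGE into a Delta table; the table name and mount point are assumptions, and it presumes a Databricks/Delta Lake runtime where a SparkSession is already configured.

```python
# Sketch of a typical Databricks pipeline step: merge incremental changes
# into a Delta table. Table names and the mount point are illustrative.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # pre-provisioned in a Databricks notebook

updates = spark.read.json("/mnt/raw/customers/")  # hypothetical landing path

target = DeltaTable.forName(spark, "silver.customers")  # hypothetical table
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```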

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs, while also troubleshooting any issues that arise in the data flow.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: proficiency in the Databricks Unified Data Analytics Platform.
- Good to have: experience with Apache Spark and data warehousing solutions.
- Strong understanding of data modeling and database design principles.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based in Hyderabad.
- 15 years of full-time education is required.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

As a Senior Data Engineer, you will architect, build, and maintain the data infrastructure that powers critical business decisions. You will work closely with data scientists, analysts, and product teams to design and implement scalable solutions for data processing, storage, and retrieval. Your work will directly impact our ability to leverage data for business intelligence, machine learning initiatives, and customer insights.

Responsibilities
- Design, build, and maintain our end-to-end data infrastructure on the AWS and GCP cloud platforms.
- Develop and optimize ETL/ELT pipelines to process large volumes of data from multiple sources.
- Build and support data pipelines for reporting, analytics, and machine learning applications.
- Implement and manage streaming data solutions using Kafka and other technologies.
- Design and optimize database schemas and data models in ClickHouse and other databases.
- Develop and maintain data workflows using Apache Airflow and similar orchestration tools (see the DAG sketch below).
- Write efficient, maintainable, and scalable code using PySpark and other data processing frameworks.
- Collaborate with data scientists to implement ML infrastructure for model training and deployment.
- Ensure data quality, reliability, and security across all data platforms.
- Monitor data pipelines and implement proactive alerting systems.
- Troubleshoot and resolve data infrastructure issues.
- Document data flows, architectures, and processes.
- Stay current with industry trends and emerging technologies in data engineering.

Requirements
- Bachelor's degree in Computer Science, Engineering, or a related technical field (Master's preferred).
- 5+ years of experience in data engineering roles.
- Strong expertise in AWS and/or GCP cloud platforms and services.
- Proficiency in building data pipelines using modern ETL/ELT tools and frameworks.
- Experience with stream processing technologies such as Kafka.
- Hands-on experience with ClickHouse or similar analytical databases.
- Strong programming skills in Python and experience with PySpark.
- Experience with workflow orchestration tools like Apache Airflow.
- Solid understanding of data modeling, data warehousing concepts, and dimensional modeling.
- Knowledge of SQL and NoSQL databases.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and the ability to work in cross-functional teams.
- Experience in D2C, e-commerce, or retail industries.
- Knowledge of data visualization tools (Tableau, Looker, Power BI).
- Experience with real-time analytics solutions.
- Familiarity with CI/CD practices for data pipelines.
- Experience with containerization technologies (Docker, Kubernetes).
- Understanding of data governance and compliance requirements.
- Experience with MLOps or ML engineering technologies.

Technologies
- Cloud platforms: AWS (S3, Redshift, EMR, Lambda), GCP (BigQuery, Dataflow, Dataproc).
- Data processing: Apache Spark, PySpark, Python, SQL.
- Streaming: Apache Kafka, Kinesis.
- Data storage: ClickHouse, S3, BigQuery, PostgreSQL, MongoDB.
- Orchestration: Apache Airflow.
- Version control: Git.
- Containerization: Docker, Kubernetes (optional).

This job was posted by Sidharth Patra from Traya Health.
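The orchestration work referenced in the responsibilities is typically expressed as a DAG like the following minimal Apache Airflow 2.x sketch; the task bodies and schedule are illustrative assumptions.

```python
# Minimal Apache Airflow 2.x DAG sketch for the orchestration work above;
# the DAG id, schedule, and task bodies are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull from source systems")  # placeholder task body


def load():
    print("write to the warehouse")  # placeholder task body


with DAG(
    dag_id="daily_reporting_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
```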

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

We are looking for a highly skilled and hands-on Senior Data Engineer to join our growing data engineering practice in Mumbai. This role requires deep technical expertise in building and managing enterprise-grade data pipelines, with a primary focus on Amazon Redshift, AWS Glue, and data orchestration using Airflow or Step Functions. You will be responsible for building scalable, high-performance data workflows that ingest and process multi-terabyte-scale data across complex, concurrent environments. The ideal candidate thrives on solving performance bottlenecks, has led or participated in data warehouse migrations (e.g., Snowflake to Redshift), and is confident interfacing with business stakeholders to translate requirements into robust data solutions.

Responsibilities
- Design, develop, and maintain high-throughput ETL/ELT pipelines using AWS Glue (PySpark), orchestrated via Apache Airflow or AWS Step Functions.
- Own and optimize large-scale Amazon Redshift clusters and manage high-concurrency workloads for a very large user base.
- Lead and contribute to migration projects from Snowflake or traditional RDBMS to Redshift, ensuring minimal downtime and robust validation (see the reconciliation sketch below).
- Integrate and normalize data from heterogeneous sources, including REST APIs, AWS Aurora (MySQL/Postgres), streaming inputs, and flat files.
- Implement intelligent caching strategies and leverage EC2 and serverless compute (Lambda, Glue) for custom transformations and processing at scale.
- Write advanced SQL for analytics, data reconciliation, and validation, demonstrating strong SQL development and tuning experience.
- Implement comprehensive monitoring, alerting, and logging for all data pipelines to ensure reliability, availability, and cost optimization.
- Collaborate directly with product managers, analysts, and client-facing teams to gather requirements and deliver insights-ready datasets.
- Champion data governance, security, and lineage, ensuring data is auditable and well documented across all environments.

Requirements
- 2-4 years of core data engineering experience, with a focus on hands-on Amazon Redshift performance tuning and large-scale cluster management.
- Demonstrated experience handling multi-terabyte Redshift clusters and concurrent query loads, and managing complex workload segmentation and queue priorities.
- Strong experience with AWS Glue (PySpark) for large-scale ETL jobs.
- Solid understanding and implementation experience of workflow orchestration using Apache Airflow or AWS Step Functions.
- Strong proficiency in Python, advanced SQL, and data modeling concepts.
- Familiarity with CI/CD pipelines, Git, DevOps processes, and infrastructure-as-code concepts.
- Experience with Amazon Athena, Lake Formation, or S3-based data lakes.
- Hands-on participation in Snowflake, BigQuery, or Teradata migration projects.
- AWS certifications such as AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect - Associate/Professional.
- Exposure to real-time streaming architectures or Lambda architectures.

Soft Skills & Expectations
- Excellent communication skills: able to confidently engage with both technical and non-technical stakeholders, including clients.
- Strong problem-solving mindset and keen attention to performance, scalability, and reliability.
- Demonstrated ability to work independently, lead tasks, and take ownership of large-scale systems.
- Comfortable working in a fast-paced, dynamic, and client-facing environment.

This job was posted by Rituza Rani from Oneture Technologies.
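As referenced in the responsibilities, here is a minimal sketch of the kind of post-load reconciliation check this role calls for, run against Redshift over a psycopg2 connection; the cluster endpoint, credentials, and table names are hypothetical assumptions.

```python
# Sketch of a post-load reconciliation check: compare staging vs. target
# row counts in Redshift. Connection details and table names are assumptions.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com",  # hypothetical endpoint
    port=5439, dbname="analytics", user="etl_user", password="..."
)
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT
            (SELECT COUNT(*) FROM staging.orders) AS staged,
            (SELECT COUNT(*) FROM public.orders)  AS loaded
    """)
    staged, loaded = cur.fetchone()
    if staged != loaded:
        raise ValueError(f"Row count mismatch: staged={staged}, loaded={loaded}")
```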

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

As a Blis data engineer, you seek to understand the data and the problem definition and find efficient solutions, so critical thinking is a key component of efficient pipelines and effective reuse. This must include defining pipelines for the correct controls and recovery points, not only for function and scale. Across the team, everyone supports each other through mentoring, brainstorming, and pairing up. Team members have a passion for delivering products that delight and astound our customers and that have a long-lasting impact on the business. They do this while also optimising themselves and the team for long-lasting agility, which is often synonymous with practicing good engineering. They are almost always adherents of lean development and work well in environments with significant amounts of freedom and ambitious goals.

Responsibilities
- Design, build, monitor, and support large-scale data processing pipelines.
- Support, mentor, and pair with other members of the team to advance our team's capabilities and capacity.
- Help Blis explore and exploit new data streams to innovate and support commercial and technical growth.
- Work closely with Product and be comfortable with taking, making, and delivering against fast-paced decisions to delight our customers. The ideal candidate is comfortable with fast feature delivery followed by robust engineering.

Requirements
- 5+ years of direct experience delivering robust, performant data pipelines within the constraints of direct SLAs and commercial financial footprints.
- Proven experience in architecting, developing, and maintaining Apache Druid and Imply platforms, with a focus on DevOps practices and large-scale system re-architecture.
- Mastery of building pipelines in GCP, maximising the use of native and supporting technologies, e.g., Apache Airflow.
- Mastery of Python for data and computational tasks, with fluency in data cleansing, validation, and composition techniques (see the sketch below).
- Hands-on implementation and architectural familiarity with all forms of data sourcing, i.e., streaming data, relational and non-relational databases, and distributed processing technologies (e.g., Spark).
- Fluency with the Python libraries typical of data science, e.g., pandas, scikit-learn, SciPy, NumPy, MLlib, and/or other machine learning and statistical libraries.
- Advanced knowledge of cloud-based services, specifically GCP.
- Excellent working understanding of server-side Linux.
- Professional in managing and updating tasks, ensuring appropriate levels of documentation, testing, and assurance around solutions.

Desired
- Experience optimizing both code and config in Spark, Hive, or similar tools.
- Practical experience working with relational databases, including advanced operations such as partitioning and indexing.
- Knowledge of and experience with tools like AWS Athena or Google BigQuery to solve data-centric problems.
- Understanding of and ability to innovate, apply, and optimize complex algorithms and statistical techniques on large data structures.
- Experience with Python notebooks, such as Jupyter, Zeppelin, or Google Datalab, to analyze, prototype, and visualize data and algorithmic output.

This job was posted by Jaina M from Blis.
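A small pandas sketch of the cleansing-and-validation fluency the requirements call for; the input file and schema are invented for illustration.

```python
# Sketch of pandas-style data cleansing and validation; the input file
# and column names are illustrative assumptions.
import pandas as pd

df = pd.read_csv("impressions.csv")  # hypothetical input

# Cleanse: normalise types, drop malformed rows and duplicates
df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
df = df.dropna(subset=["timestamp", "user_id"]).drop_duplicates()

# Validate: fail fast if values fall outside expected bounds
assert df["latency_ms"].between(0, 60_000).all(), "latency out of range"
print(df.describe())
```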

Posted 1 week ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum Experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs, while also troubleshooting any issues that arise in the data flow.

Roles & Responsibilities:
- Act as a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: proficiency in the Databricks Unified Data Analytics Platform.
- Good to have: experience with Apache Spark and data warehousing solutions.
- Strong understanding of data modeling and database design principles.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Familiarity with programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based in Hyderabad.
- 15 years of full-time education is required.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction
A career in IBM Consulting embraces long-term relationships and close collaboration with clients across the globe. In this role, you will work for IBM BPO, part of Consulting, which accelerates digital transformation using agile methodologies, process mining, and AI-powered workflows. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including IBM Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation for success in IBM Consulting. In your role, you'll be supported by mentors and coaches who will encourage you to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and learning opportunities in an environment that embraces your unique skills and experience.

Your Role and Responsibilities
We are looking for a passionate and skilled Full Stack Developer with 3-6 years of experience to join our dynamic team. The ideal candidate should have strong expertise in Angular and Python 3, along with a solid understanding of modern web technologies. Experience building analytical dashboards using tools like Apache Superset is a big plus.

Key Responsibilities
- Develop and maintain web applications using Angular 16+ and Python 3 (see the API sketch below).
- Write clean, scalable, and well-documented code for both frontend and backend.
- Integrate RESTful APIs and third-party services.
- Collaborate with UI/UX designers and backend engineers to deliver user-centric solutions.
- Optimize applications for performance, scalability, and security.
- Write and maintain unit and integration tests to ensure software quality.
- Participate in code reviews and provide constructive feedback.
- Troubleshoot, debug, and upgrade existing systems.

Preferred Education
Master's degree.

Required Technical and Professional Expertise
- 3-6 years of hands-on experience as a Full Stack Developer.
- Strong proficiency in: Angular (v16 and above); JavaScript, TypeScript; HTML5, SCSS/CSS3; Node.js, Express.js; Python 3, especially for building REST APIs (FastAPI/Flask).
- Familiarity with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB) databases.
- Good understanding of RESTful APIs and asynchronous programming.

Preferred Technical and Professional Experience
- Experience developing analytical dashboards or data visualization platforms.
- Exposure to tools like Apache Superset, D3.js, or similar libraries.
- Understanding of CI/CD processes and version control systems like Git.
- Ability to work in agile environments and cross-functional teams.
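Since the role builds Python REST APIs with FastAPI behind an Angular frontend, here is a minimal, self-contained sketch; the dashboard resource is a hypothetical example, not part of the posting. It can be served locally with, for example, `uvicorn main:app --reload`.

```python
# Minimal FastAPI sketch of a Python REST backend; the "dashboard"
# resource and its fields are illustrative assumptions.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class Dashboard(BaseModel):
    id: int
    title: str


dashboards: dict[int, Dashboard] = {}  # in-memory store for the sketch


@app.post("/dashboards")
def create_dashboard(dashboard: Dashboard) -> Dashboard:
    dashboards[dashboard.id] = dashboard
    return dashboard


@app.get("/dashboards/{dashboard_id}")
def get_dashboard(dashboard_id: int) -> Dashboard:
    if dashboard_id not in dashboards:
        raise HTTPException(status_code=404, detail="Not found")
    return dashboards[dashboard_id]
```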

Posted 1 week ago

Apply

4.0 - 8.0 years

4 - 9 Lacs

Pune

Work from Office

Programming: Python, SQL
Databases: Relational (PostgreSQL, MySQL), NoSQL (MongoDB, Cassandra), Vector (FAISS; see the sketch below)
APIs and Integration: Protocols (REST, GraphQL), Frameworks (Django, FastAPI)
Big Data Frameworks: Hadoop, Spark

Nice to have:
Data Processing: Apache Flink, Apache Kafka
Large Language Models (LLMs): Integration of domain knowledge (RAG), prompt engineering, embeddings
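As a rough illustration of the FAISS item above (e.g., as the retrieval step of the RAG work listed under nice-to-haves), here is a minimal vector-search sketch; the dimensionality and embeddings are made up for the example.

```python
# Sketch of FAISS vector search, e.g. as the retrieval step in a RAG
# pipeline; the dimension and random vectors are stand-ins for real
# document embeddings.
import faiss
import numpy as np

dim = 128
index = faiss.IndexFlatL2(dim)  # exact L2 nearest-neighbour search

embeddings = np.random.rand(1000, dim).astype("float32")  # placeholder corpus
index.add(embeddings)

query = np.random.rand(1, dim).astype("float32")  # placeholder query embedding
distances, ids = index.search(query, 5)  # top-5 nearest documents
print(ids[0], distances[0])
```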

Posted 1 week ago

Apply

9.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Python Lead Developer
Location: Chennai (work from office, 5 days a week)
Type: Full-time | Permanent
Experience: 9 to 12 years

Job Summary
We are looking for a Lead Python Developer / Tech Lead to lead backend development and manage a team working on enterprise-grade, data-driven applications. This is an excellent opportunity to work with modern technologies like FastAPI, Apache Spark, and Lakehouse architectures, while leading a team, guiding technical decisions, and driving delivery in a fast-paced environment.

Key Responsibilities
- Lead and mentor a team of Python developers.
- Manage task allocation, code quality, and technical delivery.
- Architect and implement scalable RESTful APIs using Python and FastAPI.
- Handle large-scale data processing with Pandas, NumPy, and Apache Spark.
- Drive the adoption of Lakehouse architectures and data pipelines.
- Conduct code reviews, enforce best practices, and ensure clean, testable code.
- Collaborate with cross-functional teams, including DevOps and Data Engineering.
- Contribute to CI/CD and work in Linux-based environments, optionally with Kubernetes or MLOps tools.

Key Skills & Experience
- 9-12 years of total experience in software development.
- Strong expertise in Python, FastAPI, and modern backend frameworks.
- Deep understanding of data engineering workflows, Spark, and distributed systems.
- Experience leading agile teams or playing a tech-lead role.
- Proficiency with unit testing, Linux, and working in cloud/data environments.
- Exposure to Kubernetes, ML pipelines, or MLOps is a plus.

Posted 1 week ago

Apply

0 years

0 Lacs

Haryana, India

On-site

Lenskart – Tech@Lenskart – DevOps

Your day-to-day job will involve writing Python scripts and making infrastructure changes via Terraform. Expectations:
- Strong understanding of Linux; we breathe on Linux.
- A knack for automating manual effort: if you have to do something manually more than three times, you automate it (see the sketch below).
- Engage with cross-functional teams in the design, development, and implementation of DevOps capabilities related to enabling higher developer productivity, environment monitoring, and self-healing.
- Good knowledge of AWS.
- Excellent troubleshooting skills, as troubleshooting is part of the day-to-day work.
- Working knowledge of Kubernetes and Docker (or any container technology) in production.
- Understanding of how a CI/CD pipeline works and how it can be implemented.
- A knack for identifying performance bottlenecks and maturing the monitoring and alerting systems.
- Good knowledge of monitoring and logging tools like Grafana, Prometheus, ELK, Sumo Logic, or New Relic.
- Ability to work on-call and respond to production failures.
- Self-motivated: much of the time you will drive a project, find performance issues, or do POCs independently.
- Happy to write articles about your learnings and share them within the company and in the community.
- Ready to challenge the architecture for longer-term performance gains.
- Know how SSL, TCP/IP, VPNs, CDNs, DNS, and load balancing work.

Essential Skills
- B.E./B.Tech in CS/IT or equivalent technical qualifications.
- Knowledge of Amazon Web Services (AWS) would be a big plus.
- Experience administering/managing Windows or Linux systems.
- Hands-on experience with AWS, Jenkins, Git, and Chef.
- Experience with various application servers (Apache, Nginx, Varnish, etc.).
- Experience with Python, Chef, Terraform, Kubernetes, and Docker.
- Experience installing, upgrading, and maintaining application servers on the Linux platform.
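In the spirit of the "automate anything you do three times" expectation above, a small hedged boto3 sketch that reports EC2 instances missing an Owner tag; the region and tag policy are assumptions, not anything the posting specifies.

```python
# Sketch of a small ops-automation script: list EC2 instances that are
# missing an "Owner" tag. Region and tag policy are assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

untagged = []
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if "Owner" not in tags:
                untagged.append(instance["InstanceId"])

print(f"Instances missing an Owner tag: {untagged}")
```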

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description: Quality Engineer (Data)

Job Summary
We are seeking a highly skilled Quality Engineer with 5-10 years of professional experience to ensure the integrity, reliability, and performance of our data pipelines and AI/ML solutions within the SmartFM platform. The ideal candidate will be responsible for defining and implementing comprehensive quality assurance strategies for data ingestion, transformation, and storage, and for the machine learning models that generate insights from alarms and notifications received from various building devices. This role is crucial in delivering high-quality, trustworthy data and intelligent recommendations to optimize facility operations.

Roles and Responsibilities
- Develop and implement end-to-end quality assurance strategies and test plans for data pipelines, data transformations, and machine learning models within the SmartFM platform.
- Design, develop, and execute test cases for data ingestion processes, ensuring data completeness, consistency, and accuracy from various sources, especially those flowing through IBM StreamSets and Kafka.
- Perform rigorous data validation and quality checks on data stored in MongoDB, including schema validation, data integrity checks, and performance testing of data retrieval.
- Collaborate closely with data engineers to ensure the robustness and scalability of data pipelines and to identify and resolve data quality issues at their source.
- Work with data scientists to validate the performance, accuracy, fairness, and robustness of machine learning, deep learning, agentic-workflow, and LLM-based models, including testing model predictions, evaluating metrics, and identifying potential biases.
- Implement automated testing frameworks for data quality, pipeline validation, and model performance monitoring (see the sketch below).
- Monitor production data pipelines and deployed models for data drift, concept drift, and performance degradation, setting up appropriate alerts and reporting mechanisms.
- Participate in code reviews for data engineering and data science components, ensuring adherence to quality standards and best practices.
- Document testing procedures, test results, and data quality metrics, providing clear and actionable insights to cross-functional teams.
- Stay updated on the latest trends and tools in data quality assurance, big data testing, and MLOps, advocating for continuous improvement in our quality processes.

Required Technical Skills and Experience
- 5-10 years of professional experience in quality assurance, with a significant focus on data quality, big data testing, or ML model testing.
- Strong proficiency in SQL for complex data validation, querying, and analysis across large datasets.
- Hands-on experience with data pipeline technologies like IBM StreamSets and Apache Kafka.
- Proven experience testing and validating data stored in MongoDB or similar NoSQL databases.
- Proficiency in Python for scripting, test automation, and data validation.
- Familiarity with machine learning and deep learning concepts, including model evaluation metrics, bias detection, and performance testing.
- Understanding of agentic workflows and LLMs from a testing perspective, including prompt validation and output quality assessment.
- Experience with cloud platforms (Azure, AWS, or GCP) and their data/ML services.
- Knowledge of automated testing frameworks and tools relevant to data and ML (e.g., Pytest, Great Expectations, Deepchecks).
- Familiarity with Node.js and React environments, to understand system integration points.

Additional Qualifications
- Demonstrated expertise in written and verbal communication, adept at simplifying complex technical concepts related to data quality and model performance for diverse audiences.
- Exceptional problem-solving and analytical skills with a keen eye for detail in data.
- Experienced in collaborating seamlessly with data engineers, data scientists, software engineers, and product managers.
- Highly motivated to acquire new skills, explore emerging technologies in data quality and AI/ML testing, and stay updated on the latest industry best practices.
- Domain knowledge in facility management, IoT, or building automation is a plus.

Education Requirements / Experience
Bachelor's (BE/BTech) or Master's (MS/MTech) degree in Computer Science, Information Systems, Engineering, Statistics, or a related field.
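A minimal sketch of the automated data-quality testing described above, using pytest and pymongo; the SmartFM collection name, connection string, and alarm schema are illustrative assumptions.

```python
# Sketch of automated data-quality checks with pytest + pymongo; the
# database, collection, and alarm schema are illustrative assumptions.
import pytest
from pymongo import MongoClient


@pytest.fixture(scope="module")
def alarms():
    client = MongoClient("mongodb://localhost:27017")  # hypothetical connection
    return client["smartfm"]["alarms"]


def test_no_missing_device_ids(alarms):
    # {"device_id": None} matches documents where the field is null or absent
    assert alarms.count_documents({"device_id": None}) == 0


def test_severity_within_domain(alarms):
    bad = alarms.count_documents({"severity": {"$nin": ["low", "medium", "high"]}})
    assert bad == 0, f"{bad} documents with unexpected severity"
```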

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description: Data Engineer

Job Summary
We are seeking an experienced Data Engineer with 5-8 years of professional experience to design, build, and optimize robust and scalable data pipelines for our SmartFM platform. The ideal candidate will be instrumental in ingesting, transforming, and managing vast amounts of operational data from various building devices, ensuring high data quality and availability for analytics and AI/ML applications. This role is critical in enabling our platform to generate actionable insights, alerts, and recommendations for optimizing facility operations.

Roles and Responsibilities
- Design, develop, and maintain scalable and efficient data ingestion pipelines from diverse sources (e.g., IoT devices, sensors, existing systems) using technologies like IBM StreamSets, Azure Data Factory, Apache Spark, Talend, Apache Flink, and Kafka (see the ingestion sketch below).
- Implement robust data transformation and processing logic to clean, enrich, and structure raw data into formats suitable for analysis and machine learning models.
- Manage and optimize data storage solutions, primarily within MongoDB, ensuring efficient schema design, data indexing, and query performance for large datasets.
- Collaborate closely with data scientists to understand their data needs, provide high-quality, reliable datasets, and assist in deploying data-driven solutions.
- Ensure data quality, consistency, and integrity across all data pipelines and storage systems, implementing monitoring and alerting mechanisms for data anomalies.
- Work with cross-functional teams (software engineers, data scientists, product managers) to integrate data solutions with the React frontend and Node.js backend applications.
- Contribute to the continuous improvement of data architecture, tooling, and best practices, advocating for scalable and maintainable data solutions.
- Troubleshoot and resolve complex data-related issues, optimizing pipeline performance and ensuring data availability.
- Stay updated on emerging data engineering technologies and trends, evaluating and recommending new tools and approaches to enhance our data capabilities.

Required Technical Skills and Experience
- 5-8 years of professional experience in data engineering or a related field.
- Proven hands-on experience with data pipeline tools such as IBM StreamSets, Azure Data Factory, Apache Spark, Talend, Apache Flink, and Apache Kafka.
- Strong expertise in database management, particularly with MongoDB, including schema design, data ingestion pipelines, and data aggregation.
- Proficiency in at least one programming language commonly used in data engineering, such as Python, Java, or Scala.
- Experience with big data technologies and distributed processing frameworks (e.g., Apache Spark, Hadoop) is highly desirable.
- Familiarity with cloud platforms (Azure, AWS, or GCP) and their data services.
- Solid understanding of data warehousing concepts, ETL/ELT processes, and data modeling.
- Experience with DevOps practices for data pipelines (CI/CD, monitoring, logging).
- Knowledge of Node.js and React environments to facilitate seamless integration with existing applications.

Additional Qualifications
- Demonstrated expertise in written and verbal communication, adept at simplifying complex technical concepts for both technical and non-technical audiences.
- Strong problem-solving and analytical skills with a meticulous approach to data quality.
- Experienced in collaborating and communicating seamlessly with diverse technology roles, including development, support, and product management.
- Highly motivated to acquire new skills, explore emerging technologies, and stay updated on the latest trends in data engineering and business needs.
- Experience in the facility management domain or with IoT data is a plus.

Education Requirements / Experience
Bachelor's (BE/BTech) or Master's (MS/MTech) degree in Computer Science, Information Systems, Mathematics, Statistics, or a related quantitative field.
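A hedged sketch of the Kafka-to-MongoDB ingestion pattern this role describes, using the kafka-python and pymongo client libraries; the topic, broker address, and document schema are assumptions made for illustration.

```python
# Sketch of a Kafka -> MongoDB ingestion loop: consume device events and
# upsert them. Topic, broker, and schema are illustrative assumptions.
import json

from kafka import KafkaConsumer
from pymongo import MongoClient

consumer = KafkaConsumer(
    "building-events",                       # hypothetical topic
    bootstrap_servers="localhost:9092",      # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
events = MongoClient("mongodb://localhost:27017")["smartfm"]["events"]

for message in consumer:
    doc = message.value
    # Idempotent write: upsert on the event's natural key
    events.update_one({"event_id": doc["event_id"]}, {"$set": doc}, upsert=True)
```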

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

OpenText - The Information Company
OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation.

AI-First. Future-Driven. Human-Centered.
At OpenText, AI is at the heart of everything we do, powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us.

Your Impact
Technical Support Specialists are responsible for delivering the highest quality of technical support for OpenText products, addressing customers' concerns not just at a technical level but also from a customer-service perspective. Our Technical Support Specialist position offers you an opportunity to learn exciting technologies and exercise critical and creative thinking as you work on unique customer issues to provide resolutions.

What the Role Offers
- Represent OpenText, acting as the first point of contact for all technical support inquiries.
- Manage incidents and collaborate with other teams while adhering to SLAs and KPIs.
- Utilize exceptional written and verbal communication skills while supporting customers, demonstrating a high level of customer focus and empathy.
- Meet established service delivery guidelines and key performance indicators measured through customer satisfaction surveys.
- Collaborate with various stakeholders to act as a trusted customer advocate.

What You Need to Succeed
- 2-4 years of prior experience working with relevant technologies.
- Focus on scoping problems and strong troubleshooting ability.
- University/college degree in a related discipline.
- Willingness to work in shifts during weekdays and on-call during weekends.
- OS (Windows/Linux): OS fundamentals, troubleshooting fundamentals, logs.
- Web servers: IIS, WebSphere, WebLogic, Apache, Tomcat, JBoss.
- Knowledge of TCP/IP, networking, firewalls, troubleshooting, and traffic analysis (e.g., Wireshark).

OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

What will you be doing?
- Develop real-time streaming and batch data pipelines.
- Deliver high-quality data engineering components and services that are robust and scalable.
- Collaborate and communicate effectively with cross-functional teams to ensure delivery of strong results.
- Employ methodical approaches to data modeling, data quality, and data governance.
- Provide guidance on architecture, design, and quality engineering practices to the team.
- Leverage foundational data infrastructure to support analytics, BI, and visualization layers.
- Work closely with data scientists on feature engineering, model training frameworks, and model deployments at scale.

What are we looking for?
- BS/MS in Computer Science or a related field, or an equivalent combination of education and experience.
- A minimum of 6 years of experience in software engineering, with hands-on experience building data pipelines and big data technologies.
- Proficiency with big data technologies such as Apache Spark, Apache Iceberg, Amazon Redshift, Athena, EMR, and other AWS services (S3, Lambda, EMR).
- Expertise in at least one programming language: Python, Java, or Scala.
- Extensive experience designing and building data models, integrating data from various sources, building ETL/ELT and data-flow pipelines, and supporting all parts of the data platform.
- Expert-level SQL programming knowledge and experience.
- Experience with enterprise reporting and/or data visualization tools like Strategy, Cognos, Tableau, Looker, Power BI, Superset, QlikView, etc.
- Strong data analysis skills, capable of making data-driven arguments and effective visualizations.
- Energetic, enthusiastic, and detail-oriented.

Bonus Points
- Experience in the e-commerce/retail domain.
- Knowledge of StarRocks.
- Knowledge of web services, API integration, and data exchanges with third parties.
- Familiarity with basic statistical analysis and machine learning concepts.
- A passion for producing high-quality analytics deliverables.

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Purpose
As a key member of the support team, the Application Support Engineer is responsible for ensuring the stability and availability of critical applications. This role involves monitoring, troubleshooting, and resolving application issues while adhering to defined SLAs and processes.

Desired Skills and Experience
- Experience in an application support or technical support role, with strong troubleshooting, problem-solving, and analytical skills.
- Ability to work independently and effectively and to thrive in a fast-paced, high-pressure environment.
- Experience in either C# or Java preferred, to support effective troubleshooting and understanding of application code.
- Knowledge of various operating systems (Windows, Linux, macOS) and familiarity with software applications and tools used in the industry.
- Proficiency in programming languages such as Python, and scripting languages like Bash or PowerShell.
- Experience with database systems such as MySQL, Oracle, and SQL Server, and the ability to write and optimize SQL queries.
- Understanding of network protocols and configurations, and troubleshooting of network-related issues.
- Skills in managing and configuring servers, including web servers (Apache, Nginx) and application servers (desirable).
- Familiarity with ITIL incident management processes.
- Familiarity with monitoring and logging tools like Nagios, Splunk, or the ELK stack to track application performance and issues.
- Knowledge of version control systems like Git to manage code changes and collaborate with development teams (desirable).
- Experience with cloud platforms such as AWS, Azure, or Google Cloud for deploying and managing applications (desirable).
- Experience in fixed income markets or financial applications support is preferred.
- Strong attention to detail and ability to follow processes.
- Ability to adapt to changing priorities and client needs, with good verbal and written communication skills.

Key Responsibilities
- Provide L1/L2 technical support for applications.
- Monitor application performance and system health, proactively identifying potential issues.
- Investigate, diagnose, and resolve application incidents and service requests within agreed SLAs.
- Escalate complex or unresolved issues to the Service Manager or relevant senior teams.
- Document all support activities, including incident details, troubleshooting steps, and resolutions.
- Participate in shift handovers and knowledge sharing.
- Perform routine maintenance tasks to ensure optimal application performance.
- Collaborate with other support teams to ensure seamless issue resolution.
- Develop and maintain technical documentation and knowledge-base articles.
- Assist in the implementation of new applications and updates.
- Provide training and support to junior team members.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description and Requirements "At BMC trust is not just a word - it's a way of life!" Hybrid Description and Requirements "At BMC trust is not just a word - it's a way of life!" We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and are relentless in the pursuit of innovation! The IZOT product line includes BMC’s Intelligent Z Optimization & Transformation products, which help the world’s largest companies to monitor and manage their mainframe systems. The modernization of mainframe is the beating heart of our product line, and we achieve this goal by developing products that improve the developer experience, the mainframe integration, the speed of application development, the quality of the code and the applications’ security, while reducing operational costs and risks. We acquired several companies along the way, and we continue to grow, innovate, and perfect our solutions on an ongoing basis. BMC is looking for a Product Owner to join our amazing team! The BMC AMI Cloud Analytics product can quickly transfer, transform, and integrate mainframe data so it could be shared with the organizational data lake to be used by artificial intelligence, machine learning (AI/ML) and analytics solutions. In this role, you will lead the transformation of this cutting-edge product originally developed by Model9, a startup acquired by BMC, into a solution designed to meet the rigorous demands of enterprise customers. This exciting opportunity combines innovation, scalability, and leadership, giving you a chance to shape the product’s evolution as it reaches new heights in enterprise markets. You’ll analyze business opportunities, specify and prioritize customer requirements, and guide product development teams to deliver cutting-edge solutions that resonate with global B2B customers. As a product owner, you will be or become an expert on the product, market, and related business domains. Here is how, through this exciting role, YOU will contribute to BMC's and your own success: Lead the transformation of a startup-level solution from Model9 into a robust enterprise-grade product, addressing the complex needs of global organizations. Collaborate with engineering and QA teams to ensure technical feasibility, resolve roadblocks, and deliver solutions that align with customer needs Help plan product deliveries, including documenting detailed requirements, scheduling releases, and publishing roadmaps. Maintaining a strategic backlog of prioritized features. Drive cross-functional collaboration across development, QA, product management, and support teams to ensure seamless product delivery and customer satisfaction. Distil complex business and technical requirements into clear, concise PRD's and prioritized feature backlogs. 
To ensure you’re set up for success, you will bring the following skillset & experience: 3+ years of software product owner experience in an enterprise/B2B software company, including experience working with global B2B customers Solid technical background (preferably previous experience as a developer or QA) Deep familiarity with public cloud services and storage services (AWS EC2/FSx/EFS/EBS/S3, RDS, Aurora, etc.,) Strong understanding of ETL/ELT solutions and data transformation techniques Knowledge of modern data Lakehouse architectures (e.g., Databricks, Snowflake). B.Sc. in a related field (preferably Software Engineering or similar) or equivalent Experience leading new products and product features through ideation, research, planning, development, go-to-market and feedback cycles Fluent English, spoken and written. Willingness to travel, typically 1-2 times a quarter Whilst these are nice to have, our team can help you develop in the following skills: Background as DBA or system engineer with hands-on experience with commercial and open-source databases like MSSQL, Oracle, PostgreSQL, etc. Knowledge / experience of agile methods (especially lean); familiarity with Aha!, Jira, Confluence. Experience with ETL/ELT tools (e.g., Apache NiFi, Qlik, Precisely, Informatica, Talend, AWS Glue, Azure Data Factory). Understanding of programming languages commonly used on z/OS, such as COBOL, PL/I, REXX, and assembler. Understanding of z/OS subsystems such as JES2/JES3, RACF, DB2, CICS, MQ, and IMS. Experien ce in Cloud-based products and technologies (containerization, serverless approaches, vendor-specific cloud services, cloud security) CA-DNP Our commitment to you! BMC’s culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won’t be known just by your employee number, but for your true authentic self. BMC lets you be YOU! If after reading the above, You’re unsure if you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talents from diverse backgrounds and experience to ensure we face the world together with the best ideas! BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page. BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process. At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 2,790,000 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training, licensure, and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices. ( Returnship@BMC ) Had a break in your career? No worries. 
This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to know more and how to apply.

Posted 1 week ago

Apply

10.0 - 15.0 years

50 - 65 Lacs

Noida

Work from Office

Role Summary
Digital Experience (DX) (https://www.adobe.com/experience-cloud.html) is a USD 4B+ business serving the needs of enterprise businesses, including 95%+ of Fortune 500 organizations. Adobe Experience Manager, within Adobe DX, is the world's largest CMS platform: a solution that helps enterprises create, manage, and deliver digital experiences across various channels like websites, mobile apps, and digital signage. According to a Forrester report, Experience Manager is the most robust CMS on the market. More than 128,000 websites rely on the agile setup of Experience Manager to manage their content. We are looking for strong and passionate engineers/managers to join our team as we scale the business by building next-gen products and adding customer value to our existing offerings. If you're passionate about innovative technology, then we would be excited to talk to you!

What you'll do
Mentor and guide a high-performing engineering team to deliver outstanding results
Lead the technical design, vision, and implementation strategy for next-gen multi-cloud services
Partner with global leaders to help craft product architecture, roadmap, and release plans
Drive strategic decisions ensuring successful project delivery and high code quality
Apply standard methodologies and coding patterns to develop maintainable and modular solutions
Optimize team efficiency through innovative engineering processes and teamwork models
Attract, hire, and retain top talent while encouraging a positive, collaborative culture
Lead discussions on emerging industry technologies and influence product direction

What you need to succeed
12+ years of experience in software development with a proven leadership track record, including at least 3 years as a manager leading a team of high-performing full-stack engineers
Proficiency in Java/JSP for backend development and experience with frontend technologies like React, Angular, or jQuery
Experience with cloud platforms such as AWS or Azure
Proficiency in version control, CI/CD pipelines, and DevOps practices
Familiarity with Docker, Kubernetes, and Infrastructure as Code tools
Experience with WebSockets or event-driven architectures
Deep understanding of modern software architecture, including microservices and API-first development
Proven usage of AI/GenAI engineering productivity tools like GitHub Copilot and Cursor; practical experience with Python would be helpful
Exposure to open-source contribution models for Apache, Linux Foundation, or other third-party projects would be an added advantage
Strong problem-solving, analytical, and decision-making skills
Excellent communication, collaboration, and management skills
Passion for high-quality software and improving engineering processes
BS/MS or equivalent experience in Computer Science or a related field

Posted 1 week ago

Apply

0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site

AWS/Azure/GCP, Linux, shell scripting, IaC, Docker, Kubernetes, Jenkins, GitHub

A day in the life of an Infosys Equinox employee: As part of the Infosys Equinox delivery team, your primary role would be to ensure effective Design, Development, Validation and Support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate the same into system requirements. You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs and systems.

A clear understanding of HTTP/network protocol concepts, designs & operations - TCP dump, cookies, sessions, headers, client-server architecture.
Core strength in Linux and Azure infrastructure provisioning, including VNet, Subnet, Gateway, VM, security groups, MySQL, Blob Storage, Azure Cache, AKS Cluster, etc.
Expertise in automating Infrastructure as Code using Terraform, Packer, Ansible, shell scripting, and Azure DevOps.
Expertise in patch management and APM tools like AppDynamics and Instana for monitoring and alerting.
Knowledge of technologies including Apache Solr, MySQL, Mongo, Zookeeper, RabbitMQ, Pentaho, etc.
Knowledge of cloud platforms including AWS and GCP is an added advantage.
Ability to identify and automate recurring tasks for better productivity (see the sketch after this listing).
Ability to understand and implement industry-standard security solutions.
Experience in implementing auto scaling, DR, HA, and multi-region setups with best practices is an added advantage.
Ability to work under pressure, managing expectations from various key stakeholders.

Knowledge of design principles and fundamentals of architecture
Understanding of performance engineering
Knowledge of quality processes and estimation techniques
Basic understanding of project domain
Ability to translate functional/nonfunctional requirements to systems requirements
Ability to design and code complex programs
Ability to write test cases and scenarios based on the specifications
Good understanding of SDLC and agile methodologies
Awareness of latest technologies and trends
Logical thinking and problem-solving skills along with an ability to collaborate
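As an illustration of the recurring-task automation and HTTP-level monitoring this listing describes, here is a minimal Python sketch that probes a set of service endpoints and flags failures or slow responses. The endpoint names, URLs, and latency budget are hypothetical placeholders, not details from the posting.

```python
"""Sketch: probe HTTP endpoints and flag slow or failing services.
All URLs and thresholds are illustrative assumptions."""
import time
import requests

# Hypothetical endpoints; real targets would come from configuration.
ENDPOINTS = {
    "storefront": "https://example.com/health",
    "search": "https://example.com/solr/admin/ping",
}
LATENCY_BUDGET_SECS = 0.5  # assumed SLO, purely illustrative

def probe(name: str, url: str) -> None:
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=5)
        elapsed = time.monotonic() - start
        if resp.status_code != 200:
            print(f"ALERT {name}: HTTP {resp.status_code}")
        elif elapsed > LATENCY_BUDGET_SECS:
            print(f"WARN {name}: slow response {elapsed:.2f}s")
        else:
            print(f"OK {name}: {elapsed:.2f}s")
    except requests.RequestException as exc:
        print(f"ALERT {name}: unreachable ({exc})")

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        probe(name, url)
```

In practice a script like this would be scheduled (cron, Jenkins) and wired into an alerting channel rather than printing to stdout.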

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary
We are looking for a highly skilled Big Data & ETL Tester to join our data engineering and analytics team. The ideal candidate will have strong experience in PySpark, SQL, and Python, with a deep understanding of ETL pipelines, data validation, and cloud-based testing on AWS. Familiarity with data visualization tools like Apache Superset or Power BI is a strong plus. You will work closely with our data engineering team to ensure data availability, consistency, and quality across complex data pipelines, and help transform business requirements into robust data testing frameworks.

Key Responsibilities
• Collaborate with big data engineers to validate data pipelines and ensure data integrity across ingestion, processing, and transformation stages.
• Write complex PySpark and SQL queries to test and validate large-scale datasets.
• Perform ETL testing, covering schema validation, data completeness, accuracy, transformation logic, and performance testing (see the PySpark sketch after this listing).
• Conduct root cause analysis of data issues using structured debugging approaches.
• Build automated test scripts in Python for regression, smoke, and end-to-end data testing.
• Analyze large datasets to track KPIs and performance metrics supporting business operations and strategic decisions.
• Work with data analysts and business teams to translate business needs into testable data validation frameworks.
• Communicate testing results, insights, and data gaps via reports or dashboards (Superset/Power BI preferred).
• Identify and document areas of improvement in data processes and advocate for automation opportunities.
• Maintain detailed documentation of test plans, test cases, results, and associated dashboards.

Required Skills and Qualifications
• 2+ years of experience in big data testing and ETL testing.
• Strong hands-on skills in PySpark, SQL, and Python.
• Solid experience working with cloud platforms, especially AWS (S3, EMR, Glue, Lambda, Athena, etc.).
• Familiarity with data warehouse and lakehouse architectures.
• Working knowledge of Apache Superset, Power BI, or similar visualization tools.
• Ability to analyze large, complex datasets and provide actionable insights.
• Strong understanding of data modeling concepts, data governance, and quality frameworks.
• Experience with automation frameworks and CI/CD for data validation is a plus.

Preferred Qualifications
• Experience with Airflow, dbt, or other data orchestration tools.
• Familiarity with data cataloging tools (e.g., AWS Glue Data Catalog).
• Prior experience in a product or SaaS-based company with high-data-volume environments.

Why Join Us?
• Opportunity to work with a cutting-edge data stack in a fast-paced environment.
• Collaborate with passionate data professionals driving real business impact.
• Flexible work environment with a focus on learning and innovation.
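To make the ETL-testing responsibilities above concrete, here is a hedged PySpark sketch of the kinds of checks described: completeness, schema validation, key quality, and a transformation spot-check. It is not any team's actual framework; the S3 paths, table layout, and column names are hypothetical.

```python
"""Sketch: ETL validation checks with PySpark.
Paths and column names are illustrative assumptions."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-validation").getOrCreate()

source = spark.read.parquet("s3://example-bucket/raw/orders/")      # assumed path
target = spark.read.parquet("s3://example-bucket/curated/orders/")  # assumed path

# 1. Completeness: every source row should survive into the target.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"row-count mismatch: {src_count} vs {tgt_count}"

# 2. Schema validation: the curated layer must expose the agreed columns.
expected_cols = {"order_id", "customer_id", "order_total", "order_date"}
missing = expected_cols - set(target.columns)
assert not missing, f"missing columns: {missing}"

# 3. Key quality: primary keys must be unique and not null.
dupes = target.groupBy("order_id").count().filter(F.col("count") > 1).count()
nulls = target.filter(F.col("order_id").isNull()).count()
assert dupes == 0 and nulls == 0, f"{dupes} duplicate / {nulls} null keys"

# 4. Transformation logic: spot-check an aggregate against the source.
src_total = source.agg(F.sum("order_total")).first()[0]
tgt_total = target.agg(F.sum("order_total")).first()[0]
assert abs(src_total - tgt_total) < 0.01, "order_total drift after transform"
```

In a real framework these assertions would live in a test runner (e.g., pytest) and feed results into the dashboards the role mentions.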

Posted 1 week ago

Apply

6.0 - 11.0 years

10 - 15 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Minimum 6 years of experience in performance testing using Apache JMeter and related Apache tools. Strong expertise in scripting, test execution, result analysis, and performance bottleneck identification (an illustrative result-analysis sketch follows below).
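As a small illustration of the result-analysis skill this listing asks for, here is a Python sketch that parses a CSV-format JMeter JTL results file and ranks transactions by 95th-percentile response time. It assumes the default JMeter CSV output with a header row and `label`, `elapsed` (milliseconds), and `success` columns; the file name is a placeholder.

```python
"""Sketch: surface candidate bottlenecks from a JMeter JTL results file.
Assumes default CSV columns: label, elapsed, success."""
import csv
from collections import defaultdict

def p95(values):
    """95th-percentile by nearest-rank over a sorted copy."""
    values = sorted(values)
    return values[int(0.95 * (len(values) - 1))]

timings = defaultdict(list)
failures = defaultdict(int)

with open("results.jtl", newline="") as fh:  # hypothetical results file
    for row in csv.DictReader(fh):
        timings[row["label"]].append(int(row["elapsed"]))
        if row["success"] != "true":
            failures[row["label"]] += 1

# Rank transactions so the slowest candidates surface first.
for label, samples in sorted(timings.items(), key=lambda kv: -p95(kv[1])):
    print(f"{label}: p95={p95(samples)}ms "
          f"samples={len(samples)} errors={failures[label]}")
```

Pairing a ranking like this with server-side metrics (CPU, GC, connection pools) is what turns a slow transaction into an identified bottleneck.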

Posted 1 week ago

Apply

7.0 years

0 Lacs

Haryana, India

Remote

At GKM IT, we build high-performance, scalable technology that drives real business impact, and we're looking for a Python Engineer - Lead to help us push those boundaries even further. In this role, you won't just write code: you'll architect systems, drive best practices, and lead by example. You'll take the lead on microservices, system integrations, and performance-critical APIs while collaborating across teams to bring solutions to life. If you're passionate about writing elegant, efficient Python code and love building systems that scale beautifully, this role is tailor-made for you.

Requirements
7+ years of hands-on Python development experience
Proven experience designing and leading scalable backend systems
Expert knowledge of Python and at least one framework (e.g., Django, Flask)
Familiarity with ORM libraries and server-side templating (Jinja2, Mako, etc.)
Strong understanding of multi-threading, multi-process, and event-driven programming
Proficient in user authentication, authorization, and security compliance
Skilled in frontend basics: JavaScript, HTML5, CSS3
Experience designing and implementing scalable backend architectures and microservices
Ability to integrate multiple databases, data sources, and third-party services
Proficient with version control systems (Git)
Experience with deployment pipelines, server environment setup, and configuration
Ability to implement and configure queueing systems like RabbitMQ or Apache Kafka (see the sketch after this listing)
Write clean, reusable, testable code with strong unit test coverage
Deep debugging skills and secure coding practices ensuring accessibility and data protection compliance
Optimize application performance for various platforms (web, mobile)
Collaborate effectively with frontend developers, designers, and cross-functional teams
Lead deployment, configuration, and server environment efforts

Benefits
We don't just hire employees; we invest in people. At GKM IT, we've designed a benefits experience that's thoughtful, supportive, and actually useful. Here's what you can look forward to:
Top-Tier Work Setup: You'll be equipped with a premium MacBook and all the accessories you need. Great tools make great work.
Flexible Schedules & Remote Support: Life isn't 9-to-5. Enjoy flexible working hours, emergency work-from-home days, and utility support that makes remote life easier.
Quarterly Performance Bonuses: We don't believe in waiting a whole year to celebrate your success. Perform well, and you'll see it in your paycheck, quarterly.
Learning is Funded Here: Conferences, courses, certifications; if it helps you grow, we've got your back. We even offer a dedicated educational allowance.
Family-First Culture: Your loved ones matter to us too. From birthday and anniversary vouchers (Amazon, BookMyShow) to maternity and paternity leaves, we're here for life outside work.
Celebrations & Gifting, The GKM IT Way: Onboarding hampers, festive goodies (Diwali, Holi, New Year), and company anniversary surprises: it's always celebration season here.
Team Bonding Moments: We love food, and we love people. Quarterly lunches, dinners, and fun company retreats help us stay connected beyond the screen.
Healthcare That Has You Covered: Enjoy comprehensive health insurance for you and your family, because peace of mind shouldn't be optional.
Extra Rewards for Extra Effort: Weekend work doesn't go unnoticed, and great referrals don't go unrewarded. From incentives to bonuses, you'll feel appreciated.
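For the queueing requirement referenced above, here is a minimal sketch, assuming RabbitMQ via the `pika` client, of a durable work queue with persistent messages and fair dispatch. The broker host, queue name, and task payload are placeholders.

```python
"""Sketch: a durable RabbitMQ work queue using pika.
Host, queue name, and payload are illustrative assumptions."""
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Durable queue so pending tasks survive a broker restart.
channel.queue_declare(queue="task_queue", durable=True)

channel.basic_publish(
    exchange="",
    routing_key="task_queue",
    body=b"resize-image:42",  # hypothetical task payload
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)
print("queued task")

def handle(ch, method, properties, body):
    print(f"working on {body!r}")
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack only after success

# Fair dispatch: give each worker one unacknowledged task at a time.
channel.basic_qos(prefetch_count=1)
channel.basic_consume(queue="task_queue", on_message_callback=handle)
# channel.start_consuming()  # left commented so the sketch terminates
connection.close()
```

The durable-queue plus explicit-ack pattern is what keeps tasks from being lost when a worker or the broker dies mid-job; Kafka would solve the same problem with a log and consumer offsets instead.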

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ portals in one click.

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
