Job Title: Data Engineer (3–5 Years Experience)
Location: Gurgaon, Pune, Bangalore, Hyderabad

Job Summary:
We are seeking a skilled and motivated Data Engineer with 3 to 5 years of experience to join our growing team. The ideal candidate will have hands-on expertise in building robust, scalable data pipelines, working with modern data platforms, and enabling data-driven decision-making across the organization. You'll work closely with data scientists, analysts, and engineering teams to build and maintain efficient data infrastructure and tooling.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to support analytics and product use cases
- Collaborate with data analysts, scientists, and business stakeholders to gather requirements and translate them into data solutions
- Manage data integrations from various internal and external data sources
- Optimize data workflows for performance, cost-efficiency, and reliability
- Build and maintain data models and data warehouses using industry best practices
- Monitor, troubleshoot, and improve existing data pipelines
- Implement data quality frameworks and ensure data governance standards are followed
- Contribute to documentation, code reviews, and knowledge sharing within the team

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field
- 3–5 years of experience as a Data Engineer or in a similar data-focused role
- Strong command of SQL and proficiency in Python
- Good engineering practices
- Experience with data pipeline orchestration tools such as Apache Airflow or equivalent
- Hands-on experience with cloud data platforms (AWS/GCP/Azure) and services such as S3, Redshift, BigQuery, or Azure Data Lake
- Experience with data warehousing concepts and tools such as Snowflake, Redshift, or Databricks
- Familiarity with version control tools such as Git
- Strong analytical and communication skills

Preferred Qualifications:
- Exposure to big data tools and frameworks such as Spark, Hadoop, or Kafka
- Experience with containerization (Docker/Kubernetes)
- Familiarity with CI/CD pipelines and automation in data engineering
- Awareness of data security, privacy, and compliance principles

What We Offer:
- A collaborative and inclusive work environment
- Opportunities for continuous learning and career growth
- Competitive compensation and benefits
- Flexibility to work from any of our offices in Gurgaon, Pune, Bangalore, or Hyderabad
About Company:
Founded in 2017, CoffeeBeans specializes in offering high-end consulting services in technology, product, and processes. We help our clients attain significant improvement in quality of delivery through impactful product launches, process simplification, and help build competencies that drive business outcomes across industries. The company uses new-age technologies to help its clients build superior products and realize better customer value. We also offer data-driven solutions and AI-based products for businesses operating in a wide range of product categories and service domains.

Experience: 3–6 Years
Location: Bangalore/Mumbai/Hyderabad
Employment Type: Full-time
Mode: WFO (5 Days)

Job Summary:
We are seeking a highly skilled and experienced Java Backend Developer with 3–6 years of hands-on experience in designing and implementing scalable backend systems. The ideal candidate should be proficient in both SQL and NoSQL databases, have strong experience in building microservices using Spring Boot, and demonstrate a deep understanding of multithreading and concurrency in Java.

Key Responsibilities:
- Design, develop, test, and maintain robust and scalable backend services and APIs using Java and Spring Boot
- Develop microservices architecture-based solutions with high performance and reliability
- Work with both SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Cassandra) databases
- Optimize application performance through multithreading and concurrency management
- Collaborate with front-end developers, DevOps, and QA teams for seamless integration and deployment
- Write clean, maintainable, and well-documented code following best practices and coding standards
- Participate in code reviews and provide constructive feedback to team members
- Troubleshoot and resolve issues in development, test, and production environments

Required Skills:
- 3–6 years of professional experience in backend development using Java
- Strong hands-on experience with Spring Boot and microservices architecture
- Proficiency in SQL and NoSQL databases
- Strong understanding and application of multithreading, concurrency, and performance optimization
- Good knowledge of RESTful API design and implementation
- Experience with version control systems like Git
- Familiarity with CI/CD tools and containerization (Docker/Kubernetes) is a plus
- Strong problem-solving skills and a proactive attitude
Experience: 4–6 Years
Employment Type: Full-time
Mode: WFO (5 Days)

Job Summary:
We are seeking a highly skilled and experienced Java Backend Developer with 4–6 years of hands-on experience in designing and implementing scalable backend systems. The ideal candidate should be proficient in both SQL and NoSQL databases, have strong experience in building microservices using Spring Boot, and demonstrate a deep understanding of multithreading and concurrency in Java.

Key Responsibilities:
- Design, develop, test, and maintain robust and scalable backend services and APIs using Java and Spring Boot
- Develop microservices architecture-based solutions with high performance and reliability
- Work with both SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Cassandra) databases
- Optimize application performance through multithreading and concurrency management
- Collaborate with front-end developers, DevOps, and QA teams for seamless integration and deployment
- Write clean, maintainable, and well-documented code following best practices and coding standards
- Participate in code reviews and provide constructive feedback to team members
- Troubleshoot and resolve issues in development, test, and production environments

Required Skills:
- 4–6 years of professional experience in backend development using Java
- Strong hands-on experience with Spring Boot and microservices architecture
- Proficiency in SQL and NoSQL databases
- Strong understanding and application of multithreading, concurrency, and performance optimization
- Good knowledge of RESTful API design and implementation
- Experience with version control systems like Git
- Familiarity with CI/CD tools and containerization (Docker/Kubernetes) is a plus
- Strong problem-solving skills and a proactive attitude
Experience: 4–6 Years
Location: Bangalore (Hybrid)
Shift: Night Shift
Employment Type: Full-time

About the Role:
We are seeking a skilled and motivated Analytics Engineer with 4–6 years of experience to join our data team in Bangalore. The ideal candidate will possess a strong mix of data engineering, analytics, and stakeholder collaboration skills. You will play a key role in designing scalable data solutions and enabling data-driven decision-making across the organization.

Key Responsibilities:
- Collaborate with business and technical stakeholders to gather requirements and deliver analytics solutions
- Design and implement scalable data models using star schema and dimensional modeling approaches
- Develop and optimize ETL pipelines using Apache Spark for both batch and real-time processing (experience with Apache Pulsar is preferred)
- Write efficient, production-grade Python scripts and advanced SQL queries for data transformation and analysis
- Manage workflows and ensure data pipeline reliability using tools like Airflow, dbt, or similar orchestration frameworks
- Implement best practices in data quality, testing, and observability across all data layers
- Work with cloud-native data lakes/warehouses such as Redshift, BigQuery, Cassandra, and cloud storage platforms (S3, Azure Blob, GCS)
- Leverage relational databases such as PostgreSQL/MySQL for operational data tasks

Nice to Have:
- Exposure to containerization technologies like Docker and Kubernetes for scalable deployment
- Experience working in cloud-native analytics ecosystems

Required Skills:
- Strong experience in data modeling, ETL development, and data warehouse design
- Proven expertise in Python and SQL
- Hands-on experience with Apache Spark (ETL tuning), Airflow, dbt, or similar tools
- Practical knowledge of data quality frameworks, monitoring, and data observability
- Familiarity with both batch and streaming data architectures

What We Offer:
- Opportunity to work on cutting-edge data platforms
- Collaborative and inclusive team culture
- Competitive salary and benefits
- Career growth in the modern data engineering space
Experience: 3–6 Years
Location: Bangalore
Employment Type: Full-time
Mode: WFO (Hybrid, 3 Days)

Key Responsibilities:

Infrastructure & Kubernetes Management
- Architect and manage enterprise-grade Kubernetes clusters (on-premise and/or cloud)
- Implement and manage container orchestration using Kubernetes with a focus on high availability, scalability, and performance
- Configure Kubernetes networking (CNI, Ingress), storage (CSI, Ceph, Longhorn), and custom resources (CRDs, Operators)

Infrastructure as Code & GitOps
- Build and manage infrastructure using Terraform, Helm, and Ansible
- Implement GitOps workflows using tools like ArgoCD

Monitoring & Observability
- Deploy and maintain observability stacks using Prometheus, Grafana, EFK/ELK Stack, and Alertmanager
- Define and implement monitoring strategies for performance, uptime, and incidents

CI/CD & Automation
- Design and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, or GitHub Actions
- Develop automation scripts using Shell or Python for infrastructure tasks and deployment flows

Security & Best Practices
- Enforce Kubernetes security standards including RBAC, network policies, and container image scanning
- Implement disaster recovery, backup strategies, and compliance-aligned configurations

Leadership & Collaboration
- Collaborate with development teams to align deployment strategies
- Drive infrastructure best practices and create technical documentation

Must-Have Skills:
- 3–6 years in DevOps or Infrastructure Engineering
- Exposure to multi-cloud environments (AWS, Azure, GCP)
- 3+ years working with Kubernetes in production
- Proficiency in Terraform, Helm, Ansible, and ArgoCD
- Strong understanding of Kubernetes internals (networking, storage, controllers)
- Experience with monitoring tools (Prometheus, Grafana) and logging tools (ELK/EFK)
- Experience building secure and reliable CI/CD pipelines
- Hands-on scripting with Shell (Bash/PowerShell); familiarity with Python/Go

Good-to-Have Skills:
- Experience with service mesh (Istio, Linkerd)
- Knowledge of container security tools (e.g., Aqua, Twistlock)
- Experience with database operations or ML pipelines in Kubernetes
Experience: 6–9 Years
Location: Bangalore/Pune/Hyderabad
Work Mode: Hybrid

Founded in 2017, CoffeeBeans specializes in offering high-end consulting services in technology, product, and processes. We help our clients attain significant improvement in quality of delivery through impactful product launches, process simplification, and help build competencies that drive business outcomes across industries. The company uses new-age technologies to help its clients build superior products and realize better customer value. We also offer data-driven solutions and AI-based products for businesses operating in a wide range of product categories and service domains.

As a Data Engineer, you will play a crucial role in designing and optimizing data solutions for our clients. The ideal candidate will have a strong foundation in Python, experience with Databricks Warehouse SQL or a similar Spark-based SQL platform, and a deep understanding of performance optimization techniques in the data engineering landscape. Knowledge of serverless approaches, Spark Streaming, Structured Streaming, Delta Live Tables, and related technologies is essential.

What are we looking for:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 6–9 years of experience as a Data Engineer
- Proven track record of designing and optimizing data solutions
- Strong problem-solving and analytical skills

Must-haves:
- Python: proficiency in Python for data engineering tasks and scripting
- Performance optimization: deep understanding and practical experience in optimizing data engineering performance
- Serverless approaches: familiarity with serverless approaches in data engineering solutions

Good to have:
- Databricks Warehouse SQL or an equivalent Spark SQL platform: hands-on experience with Databricks Warehouse SQL or a similar Spark-based SQL platform
- Spark Streaming: experience with Spark Streaming for real-time data processing
- Structured Streaming: familiarity with Structured Streaming in Apache Spark
- Delta Live Tables: knowledge and practical experience with Delta Live Tables or similar technologies

What will you be doing:
- Design, develop, and maintain scalable and efficient data solutions
- Collaborate with clients to understand data requirements and provide tailored solutions
- Implement performance optimization techniques in the data engineering landscape
- Work with serverless approaches to enhance scalability and flexibility
- Utilize Spark Streaming and Structured Streaming for real-time data processing
- Implement and manage Delta Live Tables for efficient change data capture
Founded in 2017, CoffeeBeans specializes in offering high-end consulting services in technology, product, and processes. We help our clients attain significant improvement in quality of delivery through impactful product launches, process simplification, and help build competencies that drive business outcomes across industries. The company uses new-age technologies to help its clients build superior products and realize better customer value. We also offer data-driven solutions and AI-based products for businesses operating in a wide range of product categories and service domains.

What you will own:
- Design and execute creative outreach plays that increase qualified meetings and keep the funnel healthy
- Translate the sales plan into crisp ICPs and buying-committee personas; refine them with data
- Build segmented prospect lists and keep pipeline data clean, complete, and current
- Orchestrate multi-touch follow-ups (email, LinkedIn, calls) that nurture interest and accelerate deal velocity
- Draft proposals, one-pagers, and pitch decks tailored to prospect needs; ensure on-brand quality and grammar
- Produce concise market briefs and research, competitor snapshots, and meeting-prep docs that sharpen sales arguments
- Keep the sales-enablement library updated with the latest product, industry, and competitive intel

Qualifications & Experience:
- Education: BCA/B.E./B.Tech (CS, IT, or related); MBA preferred
- Experience: 1–4 years in inside sales, lead generation, or SDR/BDR roles within tech or IT services
- Tools: proficient with modern CRMs (Zoho CRM/HubSpot), sales-engagement platforms (Apollo, Sales Navigator), and G Suite for analysis and reporting
- Industry: BFSI, Retail & Healthcare

Skills:
- Strong analytical thinking; can translate data into action
- Exceptional written and verbal English; impeccable grammar
- Solid grasp of IT services, Cloud, Data & AI trends (or demonstrable eagerness to upskill fast), along with US sales experience