Leewayhertz Technologies

16 Job openings at Leewayhertz Technologies
Senior Data Engineer Gurugram 9 - 14 years INR 11.0 - 16.0 Lacs P.A. Work from Office Full Time

We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modeling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault 2.0. Responsibilities: Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF). Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling. Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer). Develop and maintain bronze → silver → gold data layers using DBT or Coalesce. Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery. Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata. Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams). Work closely with QA teams to integrate test automation and ensure data quality. Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases. Document architectures, pipelines, and workflows for internal stakeholders. Requirements Essential Skills: Job Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid). Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python. Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts. Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and Dimensional Modeling. Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF Triggers.
Expertise in monitoring and logging with CloudWatch, AWS Glue Metrics, MS Teams Alerts, and Azure Data Explorer (ADX). Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection. Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates. Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great Expectations. Personal Excellent communication and interpersonal skills, with the ability to engage with teams. Strong problem-solving, decision-making, and conflict-resolution abilities. Proven ability to work independently and lead cross-functional teams. Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism. Ability to maintain confidentiality and handle sensitive information with attention to detail and discretion. The candidate must have a strong work ethic and trustworthiness. Must be highly collaborative and team-oriented with a commitment to excellence. Preferred Skills: Job Proficiency in SQL and at least one programming language (e.g., Python, Scala). Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services. Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica). Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs). Experience with data modeling, data structures, and database design. Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).
Personal Demonstrate proactive thinking. Should have strong interpersonal relations, expert business acumen, and mentoring skills. Have the ability to work under stringent deadlines and demanding client conditions. Ability to work under pressure to achieve multiple daily deadlines for client deliverables with a mature approach. Other Relevant Information: Bachelor's in Engineering with specialization in Computer Science, Artificial Intelligence, Information Technology, or a related field. 9+ years of experience in data engineering and data architecture. LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
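As an illustration of the schema validation and data profiling step this role describes for files landing in S3/ADLS, here is a minimal pure-Python sketch; the column names and rows are hypothetical:

```python
# Hypothetical sketch of pre-load schema validation and light profiling,
# as might run before promoting files into a bronze layer.

EXPECTED_SCHEMA = {"order_id": int, "amount": float, "region": str}  # assumed columns

def validate_schema(rows):
    """Return human-readable schema violations for a batch of records."""
    errors = []
    for i, row in enumerate(rows):
        for col, typ in EXPECTED_SCHEMA.items():
            if col not in row:
                errors.append(f"row {i}: missing column '{col}'")
            elif not isinstance(row[col], typ):
                errors.append(
                    f"row {i}: '{col}' expected {typ.__name__}, "
                    f"got {type(row[col]).__name__}"
                )
    return errors

def profile(rows):
    """Per-column null percentage, a basic profiling metric."""
    total = len(rows)
    return {
        col: round(100 * sum(1 for r in rows if r.get(col) is None) / total, 1)
        for col in EXPECTED_SCHEMA
    }

rows = [
    {"order_id": 1, "amount": 9.5, "region": "north"},
    {"order_id": 2, "amount": None, "region": "south"},
]
errors = validate_schema(rows)
nulls = profile(rows)
```

In a production pipeline this logic would live in a tool like AWS Glue Data Quality or Great Expectations, as the posting notes; the sketch only shows the shape of the checks.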

Senior Data Engineer Gurugram, Delhi / NCR 7 - 12 years INR 15.0 - 30.0 Lacs P.A. Work from Office Full Time

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python & PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases. Role & responsibilities: Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark. Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning. Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations. Take full ownership from ingestion → transformation → validation → metadata documentation → dashboard-ready output. Build pipelines that are not just performant, but audit-ready and metadata-rich from the first version. Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions. Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed. Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch. Manage schemas and metadata using the AWS Glue Data Catalog. Enforce data quality using Great Expectations, with checks for null %, ranges, and referential rules. Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs). Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering. Must understand how to prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau.
Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility. Work with consultants, QA, and business teams to finalize KPIs and logic. Preferred candidate profile: Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, and the Glue Data Catalog. Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto). Proficient with Pandas and NumPy for data wrangling, feature extraction, and time-series slicing. Strong command of data governance tools like Great Expectations and OpenMetadata / Amundsen. Familiarity with tagging sensitive metadata (PII, KPIs, model inputs). Capable of creating audit logs for QA and rejected data. Experience in feature engineering: rolling averages, deltas, and time-window tagging. BI-readiness with Sigma, with exposure to Power BI / Tableau (nice to have).
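The null-percentage, range, and referential checks this posting calls for can be sketched in plain Python; in practice they would be declared in Great Expectations, but the example below shows the underlying rules on made-up shipment data:

```python
# Hypothetical data-quality checks: null %, value range, referential integrity.
# Rows, thresholds, and the carrier table are invented for illustration.

def null_pct(rows, col):
    """Percentage of rows where `col` is null."""
    return 100 * sum(1 for r in rows if r.get(col) is None) / len(rows)

def in_range(rows, col, lo, hi):
    """All non-null values of `col` must fall in [lo, hi]."""
    return all(lo <= r[col] <= hi for r in rows if r.get(col) is not None)

def referential(rows, col, valid_keys):
    """Every non-null value of `col` must exist in `valid_keys` (a foreign-key rule)."""
    return all(r[col] in valid_keys for r in rows if r.get(col) is not None)

shipments = [
    {"shipment_id": 1, "weight_kg": 120.0, "carrier_id": "C1"},
    {"shipment_id": 2, "weight_kg": None,  "carrier_id": "C2"},
    {"shipment_id": 3, "weight_kg": 80.0,  "carrier_id": "C9"},  # unknown carrier
]
carriers = {"C1", "C2"}

checks = {
    "weight_null_pct_ok": null_pct(shipments, "weight_kg") <= 40.0,
    "weight_range_ok": in_range(shipments, "weight_kg", 0, 1000),
    "carrier_fk_ok": referential(shipments, "carrier_id", carriers),
}
```

A failing check (here, the unknown carrier) is exactly the kind of event the posting expects to be written to validation logs before a pipeline reaches QA or BI.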

Senior Data Engineer Gurugram 7 - 12 years INR 15.0 - 30.0 Lacs P.A. Hybrid Full Time

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python & PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases. Role & responsibilities: Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark. Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning. Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations. Take full ownership from ingestion → transformation → validation → metadata documentation → dashboard-ready output. Build pipelines that are not just performant, but audit-ready and metadata-rich from the first version. Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions. Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed. Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch. Manage schemas and metadata using the AWS Glue Data Catalog. Enforce data quality using Great Expectations, with checks for null %, ranges, and referential rules. Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs). Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering. Must understand how to prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau.
Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility. Work with consultants, QA, and business teams to finalize KPIs and logic. Preferred candidate profile: Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, and the Glue Data Catalog. Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto). Proficient with Pandas and NumPy for data wrangling, feature extraction, and time-series slicing. Strong command of data governance tools like Great Expectations and OpenMetadata / Amundsen. Familiarity with tagging sensitive metadata (PII, KPIs, model inputs). Capable of creating audit logs for QA and rejected data. Experience in feature engineering: rolling averages, deltas, and time-window tagging. BI-readiness with Sigma, with exposure to Power BI / Tableau (nice to have).
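The rolling-average and delta features mentioned above are usually computed with Pandas; a stdlib-only sketch of the same idea, on an invented daily revenue series, looks like this:

```python
# Hypothetical feature-engineering sketch: trailing rolling mean and
# day-over-day deltas over a daily revenue series (values made up).

def rolling_mean(series, window):
    """Trailing mean; None until a full window of points is available."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i + 1 - window : i + 1]) / window)
    return out

def deltas(series):
    """Day-over-day change; None for the first point."""
    return [None] + [b - a for a, b in zip(series, series[1:])]

revenue = [100.0, 110.0, 120.0, 130.0]
ma3 = rolling_mean(revenue, 3)   # 3-day rolling average
dod = deltas(revenue)            # day-over-day delta
```

With Pandas the same features reduce to `Series.rolling(3).mean()` and `Series.diff()`; the point is that each derived column becomes a dashboard-ready, filterable field.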

Motion Graphic Designer Gurugram 2 - 4 years INR 3.0 - 4.0 Lacs P.A. Work from Office Full Time

Job Summary: We are seeking a talented and creative Motion Video Designer with 2-4 years of experience in motion graphics and video production. You will be responsible for creating high-quality explainer videos, animations, and product walkthroughs that simplify and visually communicate complex AI and technical concepts. This role requires a strong blend of storytelling, design, and technical execution, with a focus on clarity, branding, and audience engagement. Responsibilities: Design and produce animated videos, including explainer videos, product demos, tutorials, and promotional content. Collaborate with product managers, marketing, and AI/tech teams to conceptualize motion content based on technical briefs. Translate complex and abstract concepts into simple, engaging visual stories. Develop storyboards and visual flow that align with messaging goals. Integrate voice-overs, subtitles, music, and visual effects into the final output. Ensure brand consistency, professional quality, and timely delivery of video projects. Stay up-to-date with motion design trends, new tools, and techniques. Requirements Essential Skills: Job Technical Skills: Strong proficiency in Adobe After Effects (including plugins like Element 3D, Duik, or Trapcode). Proficient in Adobe Creative Suite (Photoshop, Illustrator) and Figma. Experience with 2D/3D animation, compositing, and visual effects. Working knowledge of audio syncing, voiceover integration, and video rendering formats (e.g., MP4, HD 1080p). Content Creation Skills: Ability to create technical and informative animations for software products. Experience with product explainers, corporate presentations, and tutorial-style videos. Capability to visualize user journeys and product features through clear animations.
Communication Skills: Strong command of written and verbal English. Able to interpret briefs, explain creative choices, and collaborate with non-design teams effectively. Personal Strong attention to detail and visual storytelling. Creativity paired with a problem-solving mindset. Ability to work independently and meet deadlines. Team-oriented attitude with openness to feedback. Preferred Skills: Job Experience in creating storyboards. Exposure to AI-powered video tools or plugins (e.g., Runway ML, Pika). Understanding of SaaS platforms and UI/UX motion principles. Personal Passion for innovation and digital design. Curious to learn about new tech tools and motion trends. Enthusiastic about simplifying complex content for wider audiences. Proactive in acquiring new knowledge and staying updated with industry trends. Other Relevant Information: Bachelor's degree or diploma in Animation, Multimedia Design, Visual Communication, Graphic Design, Fine Arts, or a related field. 2 to 4 years of professional experience in motion graphics, animation, or video production. Strong portfolio showcasing animated explainer videos, product demos, and storytelling ability in a technical or digital product context. LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.

Senior Data Engineer Gurugram 9 - 12 years INR 14.0 - 24.0 Lacs P.A. Remote Full Time

We are looking for an experienced Senior Data Engineer to lead the development of scalable AWS-native data lake pipelines with a strong focus on time series forecasting and upsert-ready architectures. This role requires end-to-end ownership of the data lifecycle, from ingestion to partitioning, versioning, and BI delivery. The ideal candidate must be highly proficient in AWS data services, PySpark, and versioned storage formats like Apache Hudi/Iceberg, and must understand the nuances of data quality and observability in large-scale analytics systems. Role & responsibilities Design and implement data lake zoning (Raw → Clean → Modeled) using Amazon S3, AWS Glue, and Athena. Ingest structured and unstructured datasets including POS, USDA, Circana, and internal sales data. Build versioned and upsert-friendly ETL pipelines using Apache Hudi or Iceberg. Create forecast-ready datasets with lagged, rolling, and trend features for revenue and occupancy modeling. Optimize Athena datasets with partitioning, CTAS queries, and metadata tagging. Implement S3 lifecycle policies, intelligent file partitioning, and audit logging. Build reusable transformation logic using dbt-core or PySpark to support KPIs and time series outputs. Integrate robust data quality checks using custom logs, AWS CloudWatch, or other DQ tooling. Design and manage a forecast feature registry with metrics versioning and traceability. Collaborate with BI and business teams to finalize schema design and deliverables for dashboard consumption. Preferred candidate profile 9-12 years of experience in data engineering. Deep hands-on experience with AWS Glue, Athena, S3, Step Functions, and the Glue Data Catalog. Strong command of PySpark, dbt-core, CTAS query optimization, and partition strategies. Working knowledge of Apache Hudi, Iceberg, or Delta Lake for versioned ingestion. Experience in S3 metadata tagging and scalable data lake design patterns.
Expertise in feature engineering and forecasting dataset preparation (lags, trends, windows). Proficiency in Git-based workflows (Bitbucket), CI/CD, and deployment automation. Strong understanding of time series KPIs, such as revenue forecasts, occupancy trends, or demand volatility. Data observability best practices including field-level logging, anomaly alerts, and classification tagging. Experience with statistical forecasting frameworks such as Prophet, GluonTS, or related libraries. Familiarity with Superset or Streamlit for QA visualization and UAT reporting. Understanding of macroeconomic datasets (USDA, Circana) and third-party data ingestion. Independent, critical thinker with the ability to design for scale and evolving business logic. Strong communication and collaboration with BI, QA, and business stakeholders. High attention to detail in ensuring data accuracy, quality, and documentation. Comfortable interpreting business-level KPIs and transforming them into technical pipelines.
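The "upsert-ready" ingestion the role describes is, at its core, a merge-by-key between the current table snapshot and an incoming batch; formats like Apache Hudi or Iceberg do this at table scale with versioning. A toy sketch of the pattern (keys and columns invented):

```python
# Hypothetical upsert sketch: incoming records overwrite matching keys,
# new keys are inserted — the behavior Hudi/Iceberg provide on data lakes.

def upsert(current, incoming, key="id"):
    """Return a new snapshot where incoming rows overwrite matching keys."""
    merged = {row[key]: row for row in current}
    for row in incoming:
        merged[row[key]] = row  # insert or update by primary key
    return sorted(merged.values(), key=lambda r: r[key])

current = [{"id": 1, "occupancy": 0.80}, {"id": 2, "occupancy": 0.75}]
incoming = [{"id": 2, "occupancy": 0.78}, {"id": 3, "occupancy": 0.90}]
snapshot = upsert(current, incoming)
```

In Hudi this corresponds to an upsert write on a record-key field; the in-memory version only illustrates why a stable primary key is a prerequisite for the architecture.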

Trainee Data Engineer Gurugram 0 years INR 3.75 - 4.0 Lacs P.A. Work from Office Full Time

We are looking for a motivated and enthusiastic Trainee Data Engineer to join our Engineering team. This is an excellent opportunity for recent graduates to start their career in data engineering, work with modern technologies, and learn from experienced professionals. The candidate should be eager to learn, curious about data, and willing to contribute to building scalable and reliable data systems. Responsibilities: Understand and align with the values and vision of the organization. Adhere to all company policies and procedures. Support in developing and maintaining data pipelines under supervision. Assist in handling data ingestion, processing, and storage tasks. Learn and contribute to database management and basic data modeling. Collaborate with team members to understand project requirements. Document assigned tasks, processes, and workflows. Stay proactive in learning new tools, technologies, and best practices in data engineering. Required Candidate profile: Bachelor's degree in Computer Science, Information Technology, or related field. Fresh graduates or candidates with up to 1 year of experience are eligible. Apply Link - https://leewayhertz.zohorecruit.in/jobs/Careers/32567000019403095/Trainee-Data-Engineer?source=CareerSite LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
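For the database-management and data-modeling basics a trainee would practice, Python's built-in sqlite3 module is enough to try a first aggregation query; the table and values below are made up:

```python
# A small SQL-plus-Python exercise using the standard library's sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "north", 50.0), (2, "south", 30.0), (3, "north", 20.0)],
)
# Aggregate revenue per region — a typical first data-engineering query.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
conn.close()
```

The same GROUP BY pattern carries over directly to warehouse engines like Athena or Redshift mentioned elsewhere in these postings.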

Python Technical Lead Gurugram 6 - 11 years INR 20.0 - 30.0 Lacs P.A. Remote Full Time

Job Description: We are seeking a Technical Lead with 6+ years of experience in backend development and cloud-native applications. The ideal candidate should possess deep expertise in Python (Django/FastAPI), robust database design, and modern DevOps practices on AWS. This role combines hands-on backend development with leadership responsibilities including team mentorship, architecture design, code quality enforcement, CI/CD setup, and secure, scalable backend deployment strategies. Role & responsibilities Architect and develop scalable backend services using Django and FastAPI (Python). Design and optimize database schemas for both MongoDB (NoSQL) and PostgreSQL. Implement authentication, authorization, and API security mechanisms. Manage and deploy backend applications on AWS, including EC2, ECS, RDS, Lambda, and S3. Set up and manage CI/CD pipelines using tools like AWS CodePipeline and Jenkins. Lead backend code reviews to enforce quality, performance, and security standards. Mentor backend engineers through pair programming, 1:1 sessions, and technical feedback. Manage version control and branching strategies to ensure clean, stable backend releases. Handle production deployments, ensuring rollback strategies and zero-downtime practices. Preferred candidate profile Experience with microservices architecture and containerization tools (Docker/Kubernetes). Knowledge of asynchronous programming, caching, and API performance optimization. Hands-on experience with JavaScript. Familiarity with monitoring and logging tools like Prometheus, Grafana, ELK, or CloudWatch. Understanding of serverless architecture and event-driven systems (AWS Lambda, SNS/SQS). Exposure to infrastructure as code (IaC) using Terraform or CloudFormation. Experience in backend unit and integration testing frameworks (e.g., PyTest). Familiarity with API documentation tools (e.g., Swagger/OpenAPI).
Awareness of data privacy and compliance regulations (e.g., GDPR, HIPAA).
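The authentication/authorization responsibility above can be reduced to a framework-agnostic sketch: a role-checking decorator guarding a privileged operation. The token store, roles, and endpoint are all hypothetical; in Django or FastAPI this would be middleware or a dependency instead:

```python
# Hypothetical role-based authorization sketch (framework-agnostic).
import functools

TOKENS = {"secret-admin": "admin", "secret-viewer": "viewer"}  # assumed token store

def require_role(role):
    """Reject calls whose token does not map to the required role."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(token, *args, **kwargs):
            if TOKENS.get(token) != role:
                return {"status": 403, "detail": "forbidden"}
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_user(user_id):
    """A privileged operation only admins may perform."""
    return {"status": 200, "deleted": user_id}

ok = delete_user("secret-admin", 42)
denied = delete_user("secret-viewer", 42)
```

In FastAPI the equivalent idea is typically expressed with `Depends` on a security dependency, keeping authorization out of the endpoint body.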

Trainee Data Engineer Gurugram 1 - 6 years INR Not disclosed Work from Office Internship

We are looking for a motivated and enthusiastic Trainee Data Engineer to join our Engineering team. This is an excellent opportunity for recent graduates to start their career in data engineering, work with modern technologies, and learn from experienced professionals. The candidate should be eager to learn, curious about data, and willing to contribute to building scalable and reliable data systems. Responsibilities Understand and align with the values and vision of the organization. Adhere to all company policies and procedures. Support in developing and maintaining data pipelines under supervision. Assist in handling data ingestion, processing, and storage tasks. Learn and contribute to database management and basic data modeling. Collaborate with team members to understand project requirements. Document assigned tasks, processes, and workflows. Stay proactive in learning new tools, technologies, and best practices in data engineering. Requirements Essential Skills Job Basic knowledge of SQL and at least one programming language (e.g., Python, Scala). Understanding of databases and data concepts. Familiarity with cloud platforms (AWS, Azure, GCP) will be an added advantage. Good analytical and problem-solving skills. Basic knowledge of data visualization tools (e.g., Tableau, Power BI). Personal Strong communication skills. Eagerness to learn and adapt to new technologies. Ability to work well in a team environment. Positive attitude and attention to detail. Preferred Skills Job Exposure to data pipelines, ETL concepts, or data warehousing (academic/project level). Basic knowledge of data visualization tools (e.g., Excel, Power BI, Tableau). Personal Proactive, self-motivated, and enthusiastic learner. Ability to take feedback and apply it effectively.
Strong sense of responsibility and ownership for assigned tasks. Other Relevant Information: Bachelor's degree in Computer Science, Information Technology, or a related field. Fresh graduates or candidates with up to 1 year of experience are eligible. LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants. Work experience: 0-1 year. Number of positions: 2. Location: Gurugram, Haryana 122001, India. Date opened: 2025-08-19.

Senior Data Engineer (Data Lake, Forecasting & Governance) Gurugram 9 - 12 years INR 15.0 - 30.0 Lacs P.A. Remote Full Time

We are looking for an experienced Senior Data Engineer to lead the development of scalable AWS-native data lake pipelines, with a strong focus on time series forecasting, upsert-ready architectures, and enterprise-grade data governance. This role demands end-to-end ownership of the data lifecycle, from ingestion to partitioning, versioning, QA, lineage tracking, and BI delivery. The ideal candidate will be highly proficient in AWS data services, PySpark, and versioned storage formats such as Apache Hudi or Iceberg. A strong understanding of data quality, observability, governance, and metadata management in large-scale analytical systems is critical. Roles & Responsibilities Design and implement data lake zoning (Raw → Clean → Modeled) using Amazon S3, AWS Glue, and Athena. Ingest structured and unstructured datasets including POS, USDA, Circana, and internal sales data. Build versioned and upsert-ready ETL pipelines using Apache Hudi or Iceberg. Create forecast-ready datasets with lagged, rolling, and trend features for revenue and occupancy modeling. Optimize Athena datasets with partitioning, CTAS queries, and S3 metadata tagging. Implement S3 lifecycle policies, intelligent file partitioning, and audit logging for performance and compliance. Build reusable transformation logic using dbt-core or PySpark to support KPIs and time series outputs. Integrate data quality frameworks such as Great Expectations, custom logs, and AWS CloudWatch for field-level validation and anomaly detection. Apply data governance practices using tools like OpenMetadata or Atlan, enabling lineage tracking, data cataloging, and impact analysis. Establish QA automation frameworks for pipeline validation, data regression testing, and UAT handoff. Collaborate with BI, QA, and business teams to finalize schema design and deliverables for dashboard consumption. Ensure compliance with enterprise data governance policies and enable discovery and collaboration through metadata platforms.
Preferred Candidate Profile 9-12 years of experience in data engineering. Deep hands-on experience with AWS Glue, Athena, S3, Step Functions, and the Glue Data Catalog. Strong command of PySpark, dbt-core, CTAS query optimization, and advanced partition strategies. Proven experience with versioned ingestion using Apache Hudi, Iceberg, or Delta Lake. Experience in data lineage, metadata tagging, and governance tooling using OpenMetadata, Atlan, or similar platforms. Proficiency in feature engineering for time series forecasting (lags, rolling windows, trends). Expertise in Git-based workflows, CI/CD, and deployment automation (Bitbucket or similar). Strong understanding of time series KPIs: revenue forecasts, occupancy trends, demand volatility, etc. Knowledge of statistical forecasting frameworks (e.g., Prophet, GluonTS, Scikit-learn). Experience with Superset or Streamlit for QA visualization and UAT testing. Experience building data QA frameworks and embedding data validation checks at each stage of the ETL lifecycle. Independent thinker capable of designing systems that scale with evolving business logic and compliance requirements. Excellent communication skills for collaboration with BI, QA, data governance, and business stakeholders. High attention to detail, especially around data accuracy, documentation, traceability, and auditability.

Penetration Tester Gurugram 4 - 8 years INR 8.0 - 14.0 Lacs P.A. Remote Full Time

As a Penetration Tester, you will be instrumental in safeguarding our AI platforms by identifying vulnerabilities and simulating real-world attacks. Your expertise will help fortify our systems, ensuring the integrity and trustworthiness of our AI solutions. Role & responsibilities Conduct Penetration Tests: Perform comprehensive penetration testing on AI models, APIs, cloud infrastructures, and associated systems to uncover security weaknesses. AI-Specific Threat Analysis: Identify and assess vulnerabilities unique to AI systems, including model inversion, data poisoning, and adversarial attacks. Tool Development: Create and maintain custom scripts and tools to automate testing processes and improve efficiency. Reporting: Document findings in detailed reports, providing actionable recommendations to mitigate identified risks. Collaboration: Work closely with development, data science, and DevOps teams to integrate security best practices throughout the AI product lifecycle. Stay Updated: Keep abreast of the latest cybersecurity threats, penetration testing techniques, and AI security research. Job Penetration Testing Tools: Proficiency with tools like Kali Linux, Burp Suite, Metasploit, Nmap, and Wireshark. Programming and Scripting: Strong skills in Python, Bash, or PowerShell for automating tasks and developing custom testing tools. Networking and Protocols: In-depth understanding of TCP/IP, DNS, HTTP/HTTPS, and other networking protocols. Operating Systems: Experience with Windows, Linux, and macOS environments. Cloud Security: Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and their security configurations. AI and Machine Learning: Basic understanding of machine learning frameworks (e.g., TensorFlow, PyTorch) and AI model architectures. Preferred candidate profile Advanced Threat Analysis: Experience in identifying and mitigating sophisticated cyber threats.
Social Engineering: Knowledge of social engineering tactics and their application in penetration testing. Security Frameworks: Familiarity with OWASP, NIST, and ISO/IEC 27001 standards. Secure Coding Practices: Understanding of secure coding standards and the ability to perform code reviews.

Senior Data Engineer Gurugram 5 - 10 years INR 16.0 - 27.5 Lacs P.A. Remote Full Time

Role & responsibilities As a Senior Data Engineer, you will be responsible for designing, building, and optimizing data pipelines and lakehouse architectures on AWS. You will ensure data availability, quality, lineage, and governance across analytical and operational platforms. Your expertise will enable scalable, secure, and cost-effective data solutions that power advanced analytics and business intelligence. Responsibilities : Implement and manage S3 (raw, staging, curated zones), Glue Catalog, Lake Formation, and Iceberg/Hudi/Delta Lake for schema evolution and versioning. Develop PySpark jobs on Glue/EMR, enforce schema validation, partitioning, and scalable transformations. Build workflows using Step Functions, EventBridge, or Airflow (MWAA), with CI/CD deployments via CodePipeline & CodeBuild. Apply schema contracts, validations (Glue Schema Registry, Deequ, Great Expectations), and maintain lineage/metadata using Glue Catalog or third-party tools (Atlan, OpenMetadata, Collibra). Enable Athena and Redshift Spectrum queries, manage operational stores (DynamoDB/Aurora), and integrate with OpenSearch for observability. Design efficient partitioning/bucketing strategies, adopt columnar formats (Parquet/ORC), and implement spot instance usage/bookmarking. Enforce IAM-based access policies, apply KMS encryption, private endpoints, and GDPR/PII data masking. Prepare Gold-layer KPIs for dashboards, forecasting, and customer insights with QuickSight, Superset, or Metabase. Partner with analysts, data scientists, and DevOps to enable seamless data consumption and delivery. Preferred candidate profile Hands-on expertise with AWS data stack (S3, Glue, Lake Formation, Athena, Redshift, EMR, Lambda). Strong programming skills in PySpark & Python for ETL, scripting, and automation. Proficiency in SQL (CTEs, window functions, complex aggregations). Experience in data governance, quality frameworks (Deequ, Great Expectations). 
Knowledge of data modeling, partitioning strategies, and schema enforcement. Familiarity with BI integration (QuickSight, Superset, Metabase). Benefits: This role offers the flexibility of working remotely in India.
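The partitioning/bucketing strategy this posting refers to often comes down to deriving a deterministic storage key: a date partition plus a hash bucket so related records co-locate. A small sketch, with an invented S3-style layout and key scheme:

```python
# Hypothetical partitioning sketch: date partition + CRC32 hash bucket,
# producing a Hive-style key layout (path scheme is made up for illustration).
import zlib

def partition_key(event_date, customer_id, buckets=8):
    """Stable bucket via CRC32 so the same customer always lands in one bucket."""
    bucket = zlib.crc32(customer_id.encode()) % buckets
    return f"curated/events/dt={event_date}/bucket={bucket}/"

k1 = partition_key("2024-01-15", "cust-001")
k2 = partition_key("2024-01-15", "cust-001")  # deterministic: same key both times
```

Determinism is the property that matters: query engines like Athena can then prune both the date partition and the bucket, and repeated loads of the same customer never scatter across buckets.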

Head Digital Marketing Gurugram 10 - 12 years INR 30.0 - 45.0 Lacs P.A. Remote Full Time

Job description
We are seeking a Digital Marketing Head with 10+ years of experience driving end-to-end digital strategies and performance marketing across global B2B tech ecosystems. The ideal candidate will bring a deep understanding of AI/Generative AI industry dynamics and proven expertise in building brand authority, lead-generation funnels, and high-impact digital campaigns. This role requires a strategic thinker who is also execution-focused: someone who can own ZBrain's digital presence across paid, owned, and earned channels while leading a high-performing team. Key responsibilities include growth marketing, SEO/SEM, content strategy, analytics-driven decision-making, marketing automation, and positioning ZBrain as a thought leader in enterprise AI transformation.

Role & responsibilities
- Digital Strategy Development: Design and implement robust digital marketing strategies tailored to ZBrain's business goals and technological vision. These strategies will span multiple channels and are aimed at strengthening brand awareness, demand generation, and ZBrain's thought leadership within the AI domain.
- Campaign Management and Execution: Lead end-to-end execution of digital marketing campaigns across platforms such as Google Ads, LinkedIn, and Twitter/X. Continuously optimize these campaigns using performance analytics to improve engagement, lead quality, and overall conversion rates.
- Content Copywriting: Craft compelling marketing copy for a range of digital assets, including advertisements, landing pages, email campaigns, and social media. A key focus will be on translating complex AI and Generative AI concepts into clear, persuasive messaging that resonates with both technical and non-technical audiences.
- Video Content Creation: Conceptualize and script promotional videos, product demos, and customer success stories. Collaborate with designers and video teams to ensure all content is visually engaging and aligned with the brand's tone and positioning.
- Social Media Marketing: Develop and execute platform-specific social media strategies, particularly for LinkedIn and Twitter/X. Build an active, engaged audience through curated content that showcases ZBrain's innovation and impact.
- PPC and Paid Advertising: Manage and continually refine paid advertising efforts across channels such as Google and LinkedIn, focusing on measurable outcomes, high return on investment, and quality lead acquisition.
- Team Leadership and Development: Lead and mentor a high-performing digital marketing team, fostering a collaborative, data-driven, and innovative environment. Align team outputs with strategic objectives and nurture talent development.
- Market Insights and Competitive Analysis: Conduct in-depth research on industry trends, audience preferences, and competitive activity. These insights will directly influence messaging strategies, content planning, and campaign adjustments to maintain ZBrain's market edge.
- Cross-Functional Collaboration: Work closely with product, content, and sales teams to plan and execute go-to-market campaigns. Develop strategic marketing collateral such as pitch decks, brochures, and case studies to support business development efforts.

Preferred candidate profile
- 10+ years in digital marketing with 3+ years in a leadership role (AI, Generative AI, SaaS).
- Expertise in digital copywriting for ads, emails, and landing pages.
- Experience conceptualizing and delivering video marketing assets, from concept to execution, with a focus on engaging audiences.
- Track record of running high-ROI PPC and social campaigns.
- Proficiency in Google Analytics, Clarity, and similar tools.
- Understanding of AI/Generative AI technologies and market trends.
- Exceptional copywriting and storytelling skills tailored to AI/Generative AI solutions.
- Strong leadership and collaboration skills, with experience working in cross-functional teams.
- Passion for innovation and staying ahead of trends in AI and digital marketing.
- Strategic thinker with data-driven execution skills.
- B2B marketing experience in AI-driven product environments.
- Familiarity with marketing automation and CRM tools (e.g., HubSpot, Salesforce).
- SEO and keyword-optimization understanding is a plus.

Data Engineer Gurugram 3 - 5 years INR 15.0 - 25.0 Lacs P.A. Remote Full Time

Role & responsibilities
- Design, develop, and optimize data pipelines using PySpark and AWS services.
- Implement and manage data workflows, ETL processes, and schema validations.
- Ensure data quality, integrity, and consistency by applying validation frameworks (e.g., Deequ, Great Expectations).
- Work with data in different zones (raw, staging, curated) and implement partitioning, schema evolution, and governance best practices.
- Collaborate with analysts, data scientists, and cross-functional teams to deliver reliable and consumable datasets.
- Support data lineage, metadata management, and compliance requirements.

Requirements
- Strong hands-on experience with PySpark and Python for ETL, data transformation, and automation.
- Proficiency in SQL (joins, window functions, aggregations).
- Experience with the AWS data stack (S3, Glue, Athena, Redshift, EMR, or similar).
- Knowledge of data quality frameworks (Deequ, Great Expectations) and data governance principles.
- Good understanding of data modeling, partitioning strategies, and schema enforcement.

Preferred candidate profile
- Exposure to workflow orchestration tools (Airflow, Step Functions, or similar).
- Experience with BI/reporting integrations (QuickSight, Superset, or Metabase).
- Familiarity with real-time data ingestion (Kafka, Kinesis, MSK).
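The SQL proficiency called for above, window functions in particular, can be illustrated with a small runnable example; the table and values are made up, and SQLite stands in for the warehouse engine (Athena/Redshift use the same `OVER (PARTITION BY ...)` syntax):

```python
import sqlite3

# Window-function example: rank each customer's orders by amount,
# the kind of per-group query the SQL requirement above refers to.
# Requires SQLite 3.25+ (bundled with modern Python builds).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, amount REAL);
INSERT INTO orders VALUES ('a', 10), ('a', 30), ('b', 20);
""")
rows = conn.execute("""
SELECT customer, amount,
       RANK() OVER (PARTITION BY customer ORDER BY amount DESC) AS rnk
FROM orders
ORDER BY customer, rnk
""").fetchall()
print(rows)  # [('a', 30.0, 1), ('a', 10.0, 2), ('b', 20.0, 1)]
```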

Trainee Data Engineer Gurugram 0 years INR 3.75 - 4.0 Lacs P.A. Work from Office Full Time

We are looking for a motivated and enthusiastic Trainee Data Engineer to join our Engineering team. This is an excellent opportunity for recent graduates to start their career in data engineering, work with modern technologies, and learn from experienced professionals. The candidate should be eager to learn, curious about data, and willing to contribute to building scalable and reliable data systems.

Responsibilities:
- Understand and align with the values and vision of the organization.
- Adhere to all company policies and procedures.
- Support in developing and maintaining data pipelines under supervision.
- Assist in handling data ingestion, processing, and storage tasks.
- Learn and contribute to database management and basic data modeling.
- Collaborate with team members to understand project requirements.
- Document assigned tasks, processes, and workflows.
- Stay proactive in learning new tools, technologies, and best practices in data engineering.

Required candidate profile:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Fresh graduates or candidates with up to 1 year of experience are eligible.

Apply Link - https://leewayhertz.zohorecruit.in/jobs/Careers/32567000019933313/Trainee-Data-Engineer?source=CareerSite

LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.

Trainee QA Gurugram 0 - 1 years INR Not disclosed Work from Office Internship

We are looking for a motivated and detail-oriented Trainee QA (Quality Assurance) to join our team. This is an excellent opportunity for fresh graduates or early-career professionals to gain hands-on experience in software testing within an AI-driven environment. The ideal candidate should possess a keen analytical mindset, a passion for quality, and the ability to work in a fast-paced, team-oriented setting.

Responsibilities
- Understand the values and vision of the organization.
- Protect the Intellectual Property.
- Adhere to all the policies and procedures.
- Assist in executing test plans, test cases, and test scripts for AI-based applications and software.
- Identify, document, and track software defects and inconsistencies.
- Collaborate with developers, data scientists, and product managers to ensure high-quality software releases.
- Perform functional, regression, performance, and automated testing as required.
- Analyze test results and provide feedback on usability, performance, and reliability.
- Support the development of automation scripts to improve testing efficiency.
- Stay updated on the latest trends and best practices in AI software testing and QA methodologies.

Requirements
Essential Skills
- Basic understanding of software testing concepts and methodologies.
- Familiarity with programming languages such as Python, Java, or JavaScript is a plus.
- Knowledge of test automation tools (e.g., Selenium, JUnit, TestNG) is desirable.
- Experience with bug tracking tools like Jira or TestRail is a plus.
- Good analytical and communication skills.
- Awareness of AI/ML concepts and their impact on software testing is advantageous.

Personal
- Collaborative approach to effectively present and advocate for quick design solutions.
- A proactive approach to problem-solving, with a focus on delivering exceptional customer satisfaction.
- Ability to work independently and collaboratively in a team environment.
- Eagerness to learn and grow within a fast-evolving AI ecosystem.

Other Relevant Information
Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
Positions: 4 | Location: Gurgaon, Haryana, India (122001) | Work experience: 0-1 year | Job type: Full time | Date opened: 2025-09-30

Benefits
We are an equal opportunity employer and do not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.

QA Lead - Remote, India Gurugram 8 - 12 years INR 25.0 - 30.0 Lacs P.A. Remote Full Time

Role & responsibilities
- Understand the values and vision of the organization.
- Protect the Intellectual Property.
- Adhere to all the policies and procedures.
- Lead and manage the QA team, ensuring the delivery of high-quality AI products through comprehensive manual testing.
- Develop and implement test strategies, plans, and processes for AI applications.
- Collaborate with product managers, developers, and other stakeholders to understand requirements and define testing objectives.
- Perform end-to-end manual testing for AI models, APIs, and user interfaces.
- Create and execute detailed test cases, test plans, and test scripts.
- Identify, document, and track defects, ensuring proper resolution and retesting.
- Ensure thorough regression, functional, integration, and system testing is performed.
- Provide mentorship and guidance to junior QA engineers.
- Analyze test results and prepare detailed reports on software quality.
- Continuously improve the QA process and ensure adherence to industry standards and best practices.
- Stay updated on the latest trends and tools in QA and AI technologies.

Requirements
- Expertise in designing complex, data-rich applications or AI-powered products.
- Experience in testing complex, data-driven systems and AI algorithms.
- Hands-on experience with testing APIs, databases, and front-end/back-end systems.
- Excellent knowledge of software testing methodologies, processes, and tools.
- Experience with defect tracking tools such as JIRA or similar platforms.
- Strong understanding of AI/ML products and the specific challenges of testing AI applications.

Preferred candidate profile
- Familiarity with testing tools for APIs, databases, front-end/back-end systems, and AI algorithms.
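The step from the manual test cases described above to automated regression checks can be sketched with Python's built-in unittest; the `validate_response` helper and its payload fields are hypothetical, standing in for a real API contract:

```python
import unittest

# Hypothetical contract check for an API-style JSON payload; in a real
# suite this helper would wrap an actual API call and response schema.
def validate_response(payload: dict) -> bool:
    return payload.get("status") == "ok" and isinstance(payload.get("data"), list)

class ResponseContractTest(unittest.TestCase):
    """Regression checks: each manual test case becomes one test method."""

    def test_valid_payload_passes(self):
        self.assertTrue(validate_response({"status": "ok", "data": []}))

    def test_missing_data_fails(self):
        self.assertFalse(validate_response({"status": "ok"}))

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(ResponseContractTest)
    unittest.TextTestRunner(verbosity=0).run(suite)
```

Running such a suite on every build turns one-off manual checks into the repeatable regression coverage the role calls for.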