64 SQR APX LLP

3 Job openings at 64 SQR APX LLP
Talent Acquisition Lead | Pimpri-Chinchwad | 2-5 years | INR 3.0-4.0 Lacs P.A. | Work from Office | Full Time

We are looking for a proactive and experienced Talent Acquisition Lead to join our growing team. The ideal candidate will have hands-on expertise in managing the end-to-end recruitment process, including candidate sourcing, client coordination, and ATS and assessment tool management, while effectively handling candidate objections and driving successful offer closures for the company.

Key Responsibilities:
- Lead and manage end-to-end talent acquisition processes across multiple functions and levels.
- Interact directly with clients and hiring managers to gather requirements, provide hiring status updates, and ensure smooth communication.
- Use the Applicant Tracking System (ATS) and assessment tools effectively to manage candidate pipelines, status updates, and interview scheduling.
- Develop and execute innovative sourcing strategies across channels including job portals, social media, professional networks, and referrals.
- Screen profiles and conduct preliminary interviews to evaluate technical skills, cultural fit, and career aspirations.
- Manage candidate objections effectively and maintain high levels of candidate engagement throughout the hiring cycle.
- Build and maintain strong talent pipelines for critical and recurring positions.
- Maintain hiring reports, dashboards, and trackers, and ensure data integrity in the ATS.
- Stay current with market trends in IT hiring and recommend best practices.

Key Requirements:
- Bachelor's or Master's degree in HR, Business Administration, or a related field, or any bachelor's degree plus recruitment certifications.
- 2-5 years of proven experience in a talent acquisition role, preferably in IT/consulting/managed services.
- Prior exposure to client coordination and requirement management.
- Working knowledge of ATS platforms and assessment tools (job portals, headhunting, LinkedIn, TestGorilla, HackerRank, or similar).
- Strong understanding of sourcing techniques and employment trends.
- Proactive, target-driven, and organized, with attention to detail.
- Strong connections with hiring pools, agencies, and external networks to source niche talent.

Preferred Skills:
- Talent acquisition, assessment tools, hiring, and recruitment.
- Experience hiring for niche skills such as AI/ML, GenAI, Cloud, DevOps, Cybersecurity, Salesforce, and CRM platforms.
- Knowledge of market salary benchmarks and compensation structures.

What We Offer:
- Opportunity to work with cutting-edge technology teams.
- A dynamic, growth-oriented work environment.
- Competitive compensation package.

Senior Splunk Data Engineer | Bengaluru | 7-10 years | INR 20.0-25.0 Lacs P.A. | Hybrid | Full Time

We are hiring for the position of Senior Splunk Data Engineer with 7-10 years of relevant experience in log analytics, observability engineering, and enterprise-level Splunk platform management. The ideal candidate will be responsible for designing, implementing, and maintaining Splunk-based monitoring and alerting solutions that support scalable infrastructure and mission-critical applications. This is a key engineering role in which you will collaborate closely with security, DevOps, and SRE teams to ensure visibility, uptime, and operational intelligence across distributed systems.

Key Responsibilities:
- Design, architect, and implement scalable Splunk deployments in cloud or hybrid environments.
- Build and manage log ingestion pipelines using Splunk Universal Forwarders, HEC, syslog, and third-party connectors (a minimal HEC sketch follows this listing).
- Create and optimize SPL queries, dashboards, scheduled reports, and alerting mechanisms to support infrastructure and application monitoring.
- Automate log onboarding, parsing, field extractions, and tagging using scripts (Python, Shell).
- Integrate Splunk with cloud platforms such as AWS (CloudTrail, CloudWatch), Azure Monitor, and GCP Logging.
- Perform system tuning, indexing strategy planning, and data lifecycle management (retention/archival).
- Collaborate with developers and infrastructure engineers to onboard new services and improve observability coverage.
- Ensure security and compliance alignment using frameworks such as MITRE ATT&CK, NIST, and GDPR.
- Participate in incident response, root cause analysis, and continuous service improvement.

Candidate Profile:
- Education: Bachelor's or Master's in Computer Science, IT, or a related field.
- Experience: 7 to 10 years total, with 5+ years of hands-on Splunk experience in enterprise environments.
- Splunk Expertise: Strong understanding of Splunk architecture, SPL, knowledge objects, dashboards, ITSI, and Enterprise Security (ES).
- Scripting: Good knowledge of Python and Shell scripting for automation and integrations.
- Cloud Exposure: Working experience with AWS, Azure, or GCP log ingestion and cloud monitoring services.
- DevOps/Automation Tools: Familiarity with CI/CD pipelines, Docker/Kubernetes, and Ansible or Terraform is a plus.
- Soft Skills: Strong analytical, troubleshooting, and communication skills; ability to lead conversations with technical and non-technical stakeholders.

Required Skills:
- Splunk Enterprise / Splunk Cloud
- SPL (Search Processing Language)
- Data onboarding and field extraction
- Python / Shell scripting
- AWS / Azure / GCP integration
- Infrastructure monitoring
- SIEM and security analytics
- Log forwarding (UF, HEC, syslog)
- Dashboard and alert development
- ITSI (preferred)
- Performance tuning and capacity planning

Certifications (Preferred but not mandatory):
- Splunk Certified Power User / Admin / Architect
- AWS / Azure Fundamentals or Architect certification
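To make the log-ingestion responsibility concrete, here is a minimal sketch of pushing one JSON event to a Splunk HTTP Event Collector (HEC) endpoint from Python, using the standard requests library. The endpoint URL, token, index, and sourcetype are placeholders rather than details of this role's environment; treat it as an illustration of the HEC event format, not a production forwarder.

# A minimal sketch of sending one JSON event to a Splunk HTTP Event Collector
# (HEC) endpoint. The URL, token, index, and sourcetype are placeholders.
import requests

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder host/port
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder HEC token


def send_event(event: dict, index: str = "main", sourcetype: str = "_json") -> None:
    """Post a single event to Splunk HEC and fail loudly on HTTP errors."""
    payload = {
        "event": event,            # the log record itself
        "index": index,            # target Splunk index
        "sourcetype": sourcetype,  # tells Splunk how to parse the event
    }
    resp = requests.post(
        SPLUNK_HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},  # HEC token auth scheme
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    send_event({"service": "checkout", "level": "ERROR", "msg": "payment gateway timeout"})

In practice, Universal Forwarders or syslog pipelines handle bulk ingestion; direct HEC posts like this are typically used for application-side or ad hoc event delivery.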

Senior Data Engineer (Palantir) | Bengaluru | 7-12 years | INR 25.0-30.0 Lacs P.A. | Hybrid | Full Time

About the Role:
We are hiring a Senior Data Engineer with strong expertise in Palantir Foundry or Gotham to lead the design, development, and optimization of data engineering solutions across large-scale enterprise environments. The ideal candidate will have deep experience building secure, high-performance data pipelines using Python, ETL/ELT practices, distributed computing frameworks, and cloud platforms (AWS, Azure, GCP), while working closely with DevOps, analytics, and business teams to enable data-driven decisions.

Key Responsibilities:
- Architect, build, and optimize large-scale ETL/ELT pipelines using Palantir Foundry/Gotham.
- Integrate diverse data sources, including SQL, NoSQL, APIs, and real-time streaming platforms (e.g., Kafka), into Palantir's ontology-driven models.
- Develop and customize ontologies, transforms, and operational workflows within Palantir to support business intelligence and analytics (a sketch of a Foundry-style Python transform follows this listing).
- Implement scalable, distributed processing using frameworks like Apache Spark, Hadoop, and other big data technologies to handle petabyte-scale datasets.
- Write efficient, production-grade Python scripts for data transformation, automation, workflow orchestration, and custom business logic.
- Optimize data workflows and platform performance through query tuning, caching, partitioning, and incremental data updates.
- Ensure robust data governance, lineage tracking, and enterprise-grade security (e.g., RBAC, encryption).
- Collaborate with DevOps and platform engineering teams to implement and maintain CI/CD pipelines and automated, scalable deployment processes.
- Maintain and version-control data assets and transformations using Git and DevOps best practices.
- Partner with data scientists, analysts, and stakeholders to translate complex business requirements into scalable data engineering solutions.
- Create and maintain comprehensive technical documentation, including data models, architecture diagrams, deployment workflows, and operational guides.

Required Skills:
- Proven expertise in Palantir Foundry or Gotham, with deep knowledge of its ontology framework, code workbooks, and data pipeline architecture.
- Strong programming experience in Python and advanced SQL.
- Hands-on experience with ETL/ELT design, data transformation, and workflow automation.
- Proficiency with Apache Spark, Hadoop, or similar distributed data processing frameworks.
- Experience integrating data from relational and non-relational databases, streaming sources, and external APIs.
- Understanding of data modeling, ontology-driven design, and semantic layers.
- Solid grasp of CI/CD, DevOps practices, and deployment automation tools.
- Familiarity with cloud platforms (AWS, Azure, or GCP), containerization (Docker), and orchestration tools (Kubernetes, Airflow).

Nice to Have:
- Experience with the Foundry Ontology SDK, Object Explorer, and Python transforms in Palantir.
- Exposure to data cataloging, metadata management, and monitoring frameworks.
- Understanding of ML/AI pipeline integration within data platforms.
- Certifications in Palantir, cloud platforms, or big data technologies.

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum 7-10 years of experience in data engineering, ETL, and data integration.
- Strong programming experience in Python (including Pandas, PySpark, NumPy, etc.).
- Hands-on expertise with Palantir Foundry and its ecosystem.
- Familiarity with big data technologies such as Hadoop, Spark, Kafka, etc.
- Excellent problem-solving and analytical skills.
- Effective communication skills and a collaborative mindset.
- Proven leadership or mentorship experience is a strong plus.

Why Join Us?
- Be part of transformative data engineering projects using one of the most cutting-edge data platforms in the industry.
- Work in a fast-paced, collaborative environment with access to global teams and complex use cases.
- Hybrid work model that supports flexibility and productivity.
- Competitive salary and benefits with opportunities for growth and learning.
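As background on the Python transform work described in this listing, below is a minimal sketch of a Foundry-style PySpark transform using the transforms.api decorator interface. The dataset paths and column names are hypothetical placeholders, so read it as an illustration of the pattern rather than a reference implementation.

# A minimal sketch of a Palantir Foundry Python transform, assuming the
# documented transforms.api decorator interface. The dataset paths and
# column names ("/Company/raw/orders", etc.) are hypothetical placeholders.
from pyspark.sql import functions as F
from transforms.api import Input, Output, transform_df


@transform_df(
    Output("/Company/clean/orders_daily"),    # placeholder output dataset path
    raw_orders=Input("/Company/raw/orders"),  # placeholder input dataset path
)
def compute(raw_orders):
    """Clean raw orders and aggregate daily revenue per region."""
    cleaned = (
        raw_orders
        .filter(F.col("order_status") == "COMPLETED")     # drop incomplete orders
        .withColumn("order_date", F.to_date("order_ts"))  # normalize timestamp to date
    )
    # One row per region per day, for downstream dashboards and analytics.
    return (
        cleaned
        .groupBy("region", "order_date")
        .agg(
            F.sum("order_amount").alias("daily_revenue"),
            F.countDistinct("order_id").alias("order_count"),
        )
    )

Foundry materializes the returned DataFrame as the output dataset and records lineage between the input and output paths, which is what makes decorator-based transforms like this well suited to the governance and lineage-tracking responsibilities listed above.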