ArcOne AI

5 job openings at ArcOne AI
AI Performance Engineer | Pune, Maharashtra, India | 7 years | Not disclosed | On-site | Full Time

Position Overview
The Performance Engineer will play a critical role in analyzing, optimizing, and scaling ArcOne's data and AI systems, with a focus on revenue management. This role involves deep performance profiling across application, middleware, runtime, and infrastructure layers, developing advanced observability tools, and collaborating with cross-functional teams to meet stringent latency, throughput, and scalability goals.

Qualifications
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience:
- 7+ years of software engineering experience, with a strong focus on performance or reliability engineering for high-scale distributed systems.
- Proven expertise in optimizing performance across one or more layers of the stack (e.g., database, networking, storage, application runtime, GC tuning, Python/Golang internals, GPU utilization).
- Hands-on experience with real-time and batch processing frameworks (e.g., Apache Kafka, Spark, Flink).
- Demonstrated success in building observability, benchmarking, or performance-focused infrastructure at scale.
- Experience in revenue management systems or similar domains (e.g., pricing, forecasting) is a plus.
Technical Skills:
- Deep proficiency with performance profiling tools (e.g., perf, eBPF, VTune) and tracing systems (e.g., Jaeger, OpenTelemetry).
- Strong understanding of OS internals, including scheduling, memory management, and IO patterns.
- Expertise in programming languages such as Python, Go, or Java, with a focus on runtime optimization.

Key Responsibilities
Performance Analysis & Optimization:
- Analyze and optimize performance across the full stack, including application, middleware, runtime (e.g., Python runtime, GPU utilization), and infrastructure layers (e.g., networking, storage).
- Perform deep performance profiling, tuning, and optimization for databases, data pipelines, AI model inference, and distributed systems.
- Optimize critical components such as garbage collection (GC), memory management, IO patterns, and scheduling to ensure high efficiency.
Observability & Tooling:
- Develop and maintain tooling and metrics that provide deep observability into system performance, enabling proactive identification of bottlenecks and inefficiencies.
- Implement and enhance performance monitoring systems (e.g., tracing, logging, dashboards) to track latency, throughput, and resource utilization in real time.
- Contribute to benchmarking frameworks and performance-focused infrastructure to support continuous improvement.
Cross-Functional Collaboration:
- Partner with infrastructure, platform, training, and product teams to define and achieve key performance goals for revenue management systems.
- Influence architecture and design decisions to prioritize latency, throughput, and scalability in large-scale data and AI systems.
- Align stakeholders around performance objectives, navigating ambiguity to deliver measurable improvements.
Performance Testing & SLAs:
- Lead the development and execution of performance testing strategies, including load, stress, and scalability tests, for real-time and batch processing workloads.
- Define and monitor Service Level Agreements (SLAs) and Service Level Objectives (SLOs) around latency, throughput, and system reliability.
- Drive investigations into high-impact performance regressions or scalability issues in production, ensuring rapid resolution and root-cause analysis.
System Design & Scalability:
- Collaborate on the design of robust data architectures and AI systems, ensuring scalability and performance for revenue management use cases.
- Optimize real-time streaming (e.g., Apache Kafka, Flink) and batch processing (e.g., Spark, Hadoop) workloads for high-scale environments.
- Advocate for simplicity and rigor in system design to address complex performance challenges.

ArcOne - Team Lead - Advanced Data Systems | Pune, Maharashtra, India | 7 years | Not disclosed | On-site | Full Time

Position Overview
The Team Lead for Advanced Data Systems Delivery will lead a team responsible for delivering high-performance, scalable data solutions, with a focus on designing and optimizing data architectures for mission-critical Tier 1 systems. This role requires deep expertise in deploying real-time and batch processing systems, using the ArcOne AI platform to architect scalable topologies, tune critical components, and ensure compliance with NFRs and SLAs, while maintaining a strong foundation in data architecture principles.

Key Responsibilities
Project Leadership & Delivery:
- Lead the end-to-end delivery of advanced data systems for mission-critical Tier 1 applications, focusing on revenue management and other high-impact use cases.
- Ensure solutions meet NFRs (e.g., scalability, performance, availability, security) and SLAs, delivering on time, within scope, and within budget.
- Use the ArcOne AI platform to design, scale, and tune system topologies that support complex real-time and batch workloads.
Data Architecture Leadership:
- Architect and oversee the implementation of robust, scalable, and secure data architectures for mission-critical applications.
- Design data models, schemas, and storage solutions (e.g., data lakes, data warehouses, NoSQL databases) optimized for performance, scalability, and accessibility.
- Implement and optimize ETL/ELT pipelines, ensuring efficient data ingestion, transformation, and integration across distributed systems.
- Ensure data governance, integrity, and security standards are maintained across all data architectures.
Team Management:
- Manage, mentor, and inspire a team of data engineers, architects, and performance engineers, fostering a culture of technical excellence and collaboration.
- Provide technical guidance on data architecture, advanced data systems, and system optimization, ensuring alignment with industry best practices.
- Conduct performance reviews, set team goals, and support career development for team members.
Technical Expertise:
- Oversee the deployment of high-scale, mission-critical systems handling real-time (e.g., streaming, event-driven) and batch processing workloads.
- Leverage the ArcOne AI platform to design and scale system topologies, tuning components (e.g., compute, storage, network) to meet performance and scalability requirements.
- Optimize data processing frameworks (e.g., Apache Kafka, Spark, Hadoop, Flink) and databases (SQL and NoSQL) for low latency, high throughput, and reliability.
- Lead investigations into performance regressions, scalability issues, or system failures, ensuring rapid resolution and root-cause analysis.
Stakeholder Collaboration:
- Act as the primary point of contact for stakeholders, translating business requirements into technical solutions with a focus on robust data architectures.
- Collaborate with infrastructure, platform, AI, and product teams to align on system design, performance, and data architecture goals.
- Communicate project progress, risks, and performance metrics to senior leadership and clients.
System Design & Optimization:
- Design and implement data architectures that support scalability, fault tolerance, and high availability for Tier 1 applications.
- Optimize real-time (e.g., Apache Kafka, Flink) and batch processing (e.g., Spark, Hadoop) workloads for high-scale environments.
- Ensure data pipelines are optimized for performance, leveraging the ArcOne AI platform to automate and enhance topology scaling and component tuning.
- Advocate for simplicity and rigor in system and data architecture design to address complex performance and scalability challenges.
Process & Standards:
- Define and enforce best practices for advanced data systems delivery, data architecture, CI/CD pipelines, and performance testing strategies.
- Develop and monitor SLAs/SLOs for latency, throughput, data availability, and system reliability, ensuring compliance with mission-critical standards.
- Stay current on industry trends in advanced data systems, data architecture, and AI technologies to drive innovation within the team.

Qualifications
Education: Bachelor's or Master's degree in Computer Science or a related field.
Experience:
- 7+ years of experience delivering advanced data systems or similar solutions, with at least 3 years in a leadership or team lead role.
- Proven track record of deploying high-scale, mission-critical Tier 1 systems with real-time and batch workloads.
- Extensive experience designing and implementing data architectures for large-scale, distributed systems, including data modeling, ETL/ELT pipelines, and storage solutions (e.g., data lakes, data warehouses).
- Hands-on expertise with the ArcOne AI platform or similar AI-driven tools to design, scale, and tune system topologies.
- Deep knowledge of performance tuning for data processing frameworks (e.g., Apache Kafka, Spark, Hadoop, Flink) and databases (e.g., SQL, NoSQL, data lakes).
- Strong understanding of NFRs (e.g., scalability, availability, performance, security) and SLAs for Tier 1 systems.
- Experience in revenue management or similar domains (e.g., pricing, forecasting, optimization) is a plus.
Technical Skills:
- Proficiency in programming languages such as Python, Java, or Scala.
- Expertise in data processing technologies (e.g., Apache Kafka, Spark, Hadoop, Flink) and cloud platforms (e.g., AWS, Azure, GCP).
- Strong experience with data architecture tools and platforms, including data modeling (e.g., ERD, dimensional modeling), data lakes (e.g., Delta Lake, Iceberg), and data warehouses and query engines (e.g., Databricks, Dremio, DuckDB, Trino).
- Proficiency with performance profiling tools (e.g., perf, eBPF) and observability systems (e.g., Prometheus, Grafana, OpenTelemetry).
- Deep understanding of OS internals, networking, storage, and compute optimization (e.g., GPU utilization, memory management).
- Experience with containerization (e.g., Docker, Kubernetes) and data orchestration tools (e.g., Airflow, Dagster).
- Knowledge of AI/ML frameworks (e.g., TensorFlow, PyTorch) for optimizing data-driven solutions is a plus.
Soft Skills:
- Exceptional leadership, communication, and stakeholder management skills.
- Strong problem-solving abilities with a focus on simplicity, rigor, and collaboration.
- Ability to navigate ambiguity and drive alignment across cross-functional teams.
Preferred:
- Experience with the ArcOne AI platform for system design and optimization.
- Familiarity with agile methodologies and tools like Jira or Trello.
- Certification in cloud platforms (e.g., AWS Certified Solutions Architect), data engineering, or data architecture (e.g., TOGAF, DAMA-DMBOK).

(ref: hirist.tech)

ArcOne - Performance Engineer - AI Systems | Pune, Maharashtra, India | 7 years | Not disclosed | On-site | Full Time

Position Overview
The Performance Engineer will play a critical role in analyzing, optimizing, and scaling ArcOne's data and AI systems, with a focus on revenue management. This role involves deep performance profiling across application, middleware, runtime, and infrastructure layers, developing advanced observability tools, and collaborating with cross-functional teams to meet stringent latency, throughput, and scalability goals.

Qualifications
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience:
- 7+ years of software engineering experience, with a strong focus on performance or reliability engineering for high-scale distributed systems.
- Proven expertise in optimizing performance across one or more layers of the stack (e.g., database, networking, storage, application runtime, GC tuning, Python/Golang internals, GPU utilization).
- Hands-on experience with real-time and batch processing frameworks (e.g., Apache Kafka, Spark, Flink).
- Demonstrated success in building observability, benchmarking, or performance-focused infrastructure at scale.
- Experience in revenue management systems or similar domains (e.g., pricing, forecasting) is a plus.
Technical Skills:
- Deep proficiency with performance profiling tools (e.g., perf, eBPF, VTune) and tracing systems (e.g., Jaeger, OpenTelemetry).
- Strong understanding of OS internals, including scheduling, memory management, and IO patterns.
- Expertise in programming languages such as Python, Go, or Java, with a focus on runtime optimization.

Key Responsibilities
Performance Analysis & Optimization:
- Analyze and optimize performance across the full stack, including application, middleware, runtime (e.g., Python runtime, GPU utilization), and infrastructure layers (e.g., networking, storage).
- Perform deep performance profiling, tuning, and optimization for databases, data pipelines, AI model inference, and distributed systems.
- Optimize critical components such as garbage collection (GC), memory management, IO patterns, and scheduling to ensure high efficiency.
Observability & Tooling:
- Develop and maintain tooling and metrics that provide deep observability into system performance, enabling proactive identification of bottlenecks and inefficiencies.
- Implement and enhance performance monitoring systems (e.g., tracing, logging, dashboards) to track latency, throughput, and resource utilization in real time.
- Contribute to benchmarking frameworks and performance-focused infrastructure to support continuous improvement.
Cross-Functional Collaboration:
- Partner with infrastructure, platform, training, and product teams to define and achieve key performance goals for revenue management systems.
- Influence architecture and design decisions to prioritize latency, throughput, and scalability in large-scale data and AI systems.
- Align stakeholders around performance objectives, navigating ambiguity to deliver measurable improvements.
Performance Testing & SLAs:
- Lead the development and execution of performance testing strategies, including load, stress, and scalability tests, for real-time and batch processing workloads.
- Define and monitor Service Level Agreements (SLAs) and Service Level Objectives (SLOs) around latency, throughput, and system reliability.
- Drive investigations into high-impact performance regressions or scalability issues in production, ensuring rapid resolution and root-cause analysis.
System Design & Scalability:
- Collaborate on the design of robust data architectures and AI systems, ensuring scalability and performance for revenue management use cases.
- Optimize real-time streaming (e.g., Apache Kafka, Flink) and batch processing (e.g., Spark, Hadoop) workloads for high-scale environments.
- Advocate for simplicity and rigor in system design to address complex performance challenges.

(ref: hirist.tech)

AI Performance Engineer | Pune, Maharashtra, India | 7 years | Not disclosed | On-site | Full Time

Position Overview
The Performance Engineer will play a critical role in analyzing, optimizing, and scaling ArcOne's data and AI systems, with a focus on revenue management. This role involves deep performance profiling across application, middleware, runtime, and infrastructure layers, developing advanced observability tools, and collaborating with cross-functional teams to meet stringent latency, throughput, and scalability goals.

Qualifications
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience:
- 7+ years of software engineering experience, with a strong focus on performance or reliability engineering for high-scale distributed systems.
- Proven expertise in optimizing performance across one or more layers of the stack (e.g., database, networking, storage, application runtime, GC tuning, Python/Golang internals, GPU utilization).
- Hands-on experience with real-time and batch processing frameworks (e.g., Apache Kafka, Spark, Flink).
- Demonstrated success in building observability, benchmarking, or performance-focused infrastructure at scale.
- Experience in revenue management systems or similar domains (e.g., pricing, forecasting) is a plus.
Technical Skills:
- Deep proficiency with performance profiling tools (e.g., perf, eBPF, VTune) and tracing systems (e.g., Jaeger, OpenTelemetry).
- Strong understanding of OS internals, including scheduling, memory management, and IO patterns.
- Expertise in programming languages such as Python, Go, or Java, with a focus on runtime optimization.

Key Responsibilities
Performance Analysis & Optimization:
- Analyze and optimize performance across the full stack, including application, middleware, runtime (e.g., Python runtime, GPU utilization), and infrastructure layers (e.g., networking, storage).
- Perform deep performance profiling, tuning, and optimization for databases, data pipelines, AI model inference, and distributed systems.
- Optimize critical components such as garbage collection (GC), memory management, IO patterns, and scheduling to ensure high efficiency.
Observability & Tooling:
- Develop and maintain tooling and metrics that provide deep observability into system performance, enabling proactive identification of bottlenecks and inefficiencies.
- Implement and enhance performance monitoring systems (e.g., tracing, logging, dashboards) to track latency, throughput, and resource utilization in real time.
- Contribute to benchmarking frameworks and performance-focused infrastructure to support continuous improvement.
Cross-Functional Collaboration:
- Partner with infrastructure, platform, training, and product teams to define and achieve key performance goals for revenue management systems.
- Influence architecture and design decisions to prioritize latency, throughput, and scalability in large-scale data and AI systems.
- Align stakeholders around performance objectives, navigating ambiguity to deliver measurable improvements.
Performance Testing & SLAs:
- Lead the development and execution of performance testing strategies, including load, stress, and scalability tests, for real-time and batch processing workloads.
- Define and monitor Service Level Agreements (SLAs) and Service Level Objectives (SLOs) around latency, throughput, and system reliability.
- Drive investigations into high-impact performance regressions or scalability issues in production, ensuring rapid resolution and root-cause analysis.
System Design & Scalability:
- Collaborate on the design of robust data architectures and AI systems, ensuring scalability and performance for revenue management use cases.
- Optimize real-time streaming (e.g., Apache Kafka, Flink) and batch processing (e.g., Spark, Hadoop) workloads for high-scale environments.
- Advocate for simplicity and rigor in system design to address complex performance challenges.

ArcOne - Team Lead - Advanced Data Systems | Pune, Maharashtra, India | 7-9 years | Not disclosed (INR) | On-site | Full Time

Position Overview
The Team Lead for Advanced Data Systems Delivery will lead a team responsible for delivering high-performance, scalable data solutions, with a focus on designing and optimizing data architectures for mission-critical Tier 1 systems. This role requires deep expertise in deploying real-time and batch processing systems, using the ArcOne AI platform to architect scalable topologies, tune critical components, and ensure compliance with NFRs and SLAs, while maintaining a strong foundation in data architecture principles.

Key Responsibilities
Project Leadership & Delivery:
- Lead the end-to-end delivery of advanced data systems for mission-critical Tier 1 applications, focusing on revenue management and other high-impact use cases.
- Ensure solutions meet NFRs (e.g., scalability, performance, availability, security) and SLAs, delivering on time, within scope, and within budget.
- Use the ArcOne AI platform to design, scale, and tune system topologies that support complex real-time and batch workloads.
Data Architecture Leadership:
- Architect and oversee the implementation of robust, scalable, and secure data architectures for mission-critical applications.
- Design data models, schemas, and storage solutions (e.g., data lakes, data warehouses, NoSQL databases) optimized for performance, scalability, and accessibility.
- Implement and optimize ETL/ELT pipelines, ensuring efficient data ingestion, transformation, and integration across distributed systems.
- Ensure data governance, integrity, and security standards are maintained across all data architectures.
Team Management:
- Manage, mentor, and inspire a team of data engineers, architects, and performance engineers, fostering a culture of technical excellence and collaboration.
- Provide technical guidance on data architecture, advanced data systems, and system optimization, ensuring alignment with industry best practices.
- Conduct performance reviews, set team goals, and support career development for team members.
Technical Expertise:
- Oversee the deployment of high-scale, mission-critical systems handling real-time (e.g., streaming, event-driven) and batch processing workloads.
- Leverage the ArcOne AI platform to design and scale system topologies, tuning components (e.g., compute, storage, network) to meet performance and scalability requirements.
- Optimize data processing frameworks (e.g., Apache Kafka, Spark, Hadoop, Flink) and databases (SQL and NoSQL) for low latency, high throughput, and reliability.
- Lead investigations into performance regressions, scalability issues, or system failures, ensuring rapid resolution and root-cause analysis.
Stakeholder Collaboration:
- Act as the primary point of contact for stakeholders, translating business requirements into technical solutions with a focus on robust data architectures.
- Collaborate with infrastructure, platform, AI, and product teams to align on system design, performance, and data architecture goals.
- Communicate project progress, risks, and performance metrics to senior leadership and clients.
System Design & Optimization:
- Design and implement data architectures that support scalability, fault tolerance, and high availability for Tier 1 applications.
- Optimize real-time (e.g., Apache Kafka, Flink) and batch processing (e.g., Spark, Hadoop) workloads for high-scale environments.
- Ensure data pipelines are optimized for performance, leveraging the ArcOne AI platform to automate and enhance topology scaling and component tuning.
- Advocate for simplicity and rigor in system and data architecture design to address complex performance and scalability challenges.
Process & Standards:
- Define and enforce best practices for advanced data systems delivery, data architecture, CI/CD pipelines, and performance testing strategies.
- Develop and monitor SLAs/SLOs for latency, throughput, data availability, and system reliability, ensuring compliance with mission-critical standards.
- Stay current on industry trends in advanced data systems, data architecture, and AI technologies to drive innovation within the team.

Qualifications
Education: Bachelor's or Master's degree in Computer Science or a related field.
Experience:
- 7+ years of experience delivering advanced data systems or similar solutions, with at least 3 years in a leadership or team lead role.
- Proven track record of deploying high-scale, mission-critical Tier 1 systems with real-time and batch workloads.
- Extensive experience designing and implementing data architectures for large-scale, distributed systems, including data modeling, ETL/ELT pipelines, and storage solutions (e.g., data lakes, data warehouses).
- Hands-on expertise with the ArcOne AI platform or similar AI-driven tools to design, scale, and tune system topologies.
- Deep knowledge of performance tuning for data processing frameworks (e.g., Apache Kafka, Spark, Hadoop, Flink) and databases (e.g., SQL, NoSQL, data lakes).
- Strong understanding of NFRs (e.g., scalability, availability, performance, security) and SLAs for Tier 1 systems.
- Experience in revenue management or similar domains (e.g., pricing, forecasting, optimization) is a plus.
Technical Skills:
- Proficiency in programming languages such as Python, Java, or Scala.
- Expertise in data processing technologies (e.g., Apache Kafka, Spark, Hadoop, Flink) and cloud platforms (e.g., AWS, Azure, GCP).
- Strong experience with data architecture tools and platforms, including data modeling (e.g., ERD, dimensional modeling), data lakes (e.g., Delta Lake, Iceberg), and data warehouses and query engines (e.g., Databricks, Dremio, DuckDB, Trino).
- Proficiency with performance profiling tools (e.g., perf, eBPF) and observability systems (e.g., Prometheus, Grafana, OpenTelemetry).
- Deep understanding of OS internals, networking, storage, and compute optimization (e.g., GPU utilization, memory management).
- Experience with containerization (e.g., Docker, Kubernetes) and data orchestration tools (e.g., Airflow, Dagster).
- Knowledge of AI/ML frameworks (e.g., TensorFlow, PyTorch) for optimizing data-driven solutions is a plus.
Soft Skills:
- Exceptional leadership, communication, and stakeholder management skills.
- Strong problem-solving abilities with a focus on simplicity, rigor, and collaboration.
- Ability to navigate ambiguity and drive alignment across cross-functional teams.
Preferred:
- Experience with the ArcOne AI platform for system design and optimization.
- Familiarity with agile methodologies and tools like Jira or Trello.
- Certification in cloud platforms (e.g., AWS Certified Solutions Architect), data engineering, or data architecture (e.g., TOGAF, DAMA-DMBOK).

(ref: hirist.tech)