RandomTrees specializes in developing data-driven solutions using advanced machine learning algorithms to solve complex problems in various industries.
Mumbai
INR 10.0 - 15.0 Lacs P.A.
Work from Office
Full Time
We are seeking a highly skilled Senior Snowflake Developer with expertise in Python, SQL, and ETL tools to join our dynamic team. The ideal candidate will have a proven track record of designing and implementing robust data solutions on the Snowflake platform, along with strong programming skills and experience with ETL processes.

Key Responsibilities:
- Design and develop scalable data solutions on the Snowflake platform to support business needs and analytics requirements.
- Lead the end-to-end development lifecycle of data pipelines, including data ingestion, transformation, and loading processes.
- Write efficient SQL queries and stored procedures to perform complex data manipulations and transformations within Snowflake.
- Implement automation scripts and tools in Python to streamline data workflows and improve efficiency.
- Collaborate with cross-functional teams to gather requirements, design data models, and deliver high-quality solutions.
- Tune and optimize Snowflake databases and queries to ensure optimal performance and scalability.
- Implement best practices for data governance, security, and compliance within Snowflake environments.
- Mentor junior team members and provide technical guidance and support as needed.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience working with the Snowflake data warehouse.
- Strong proficiency in SQL, with the ability to write complex queries and optimize performance.
- Extensive experience developing data pipelines and ETL processes using Python and ETL tools such as Apache Airflow, Informatica, or Talend.
- Strong Python coding experience (minimum 2 years) required.
- Solid understanding of data warehousing concepts, data modeling, and schema design.
- Experience working with cloud platforms such as AWS, Azure, or GCP.
- Excellent problem-solving and analytical skills with keen attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Relevant certifications in Snowflake or related technologies are a plus.
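To give a flavor of the Python automation work this posting describes, here is a minimal, hypothetical pre-load cleanup step. The function and field names are invented for illustration; a real pipeline would stage the cleaned rows into Snowflake (e.g., via COPY INTO) rather than print them.

```python
def clean_records(rows):
    """Normalize raw rows before staging them for a warehouse load.
    Field names (customer_name, amount) are illustrative only."""
    cleaned = []
    for row in rows:
        name = (row.get("customer_name") or "").strip()
        if not name:  # drop rows missing the business key
            continue
        cleaned.append({
            "customer_name": name.title(),
            "amount": round(float(row.get("amount") or 0), 2),
        })
    return cleaned

raw = [
    {"customer_name": "  alice smith ", "amount": "10.5"},
    {"customer_name": "", "amount": "3"},  # rejected: no key
]
print(clean_records(raw))
```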
Hyderabad
INR 5.0 - 10.0 Lacs P.A.
Work from Office
Full Time
Key Responsibilities:
- Administer and maintain AWS environments supporting data pipelines, including S3, EMR, Athena, Glue, Lambda, CloudFormation, and Redshift.
- Cost analysis: use AWS Cost Explorer to analyze service usage and spend, and build dashboards that alert on cost and usage outliers.
- Performance and audit: use AWS CloudTrail and CloudWatch to monitor system performance and usage.
- Monitor, troubleshoot, and optimize infrastructure performance and availability.
- Provision and manage cloud resources using Infrastructure as Code (IaC) tools (e.g., AWS CloudFormation, Terraform).
- Collaborate with data engineers working in PySpark, Hive, Kafka, and Python to ensure infrastructure alignment with processing needs.
- Support code integration with Git repositories.
- Implement and maintain security policies, IAM roles, and access controls.
- Participate in incident response and support resolution of operational issues, including on-call responsibilities.
- Manage backup, recovery, and disaster recovery processes for AWS-hosted data and services.
- Interface directly with client teams to gather requirements, provide updates, and resolve issues professionally.
- Create and maintain technical documentation and operational runbooks.

Required Qualifications:
- 3+ years of hands-on experience administering AWS infrastructure, particularly in support of data-centric workloads.
- Strong knowledge of AWS services including, but not limited to, S3, EMR, Glue, Lambda, Redshift, and Athena.
- Experience with infrastructure automation and configuration management tools (e.g., CloudFormation, Terraform, AWS CLI).
- Proficiency in Linux administration and shell scripting, including installing and managing software on Linux servers.
- Familiarity with Kafka, Hive, and distributed processing frameworks such as Apache Spark.
- Ability to manage and troubleshoot IAM configurations, networking, and cloud security best practices.
- Demonstrated experience with monitoring tools (e.g., CloudWatch, Prometheus, Grafana) and alerting systems.
- Excellent verbal and written communication skills; comfortable working with cross-functional teams and engaging directly with clients.

Preferred Qualifications:
- AWS certification (e.g., Solutions Architect Associate, SysOps Administrator).
- Experience supporting data science or analytics teams.
- Familiarity with DevOps practices and CI/CD pipelines.
- Familiarity with Apache Iceberg-based data pipelines.
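The cost-analysis duty above (Cost Explorer dashboards that alert on usage outliers) often comes down to a simple statistical rule. Below is a minimal sketch of such a rule in plain Python; the numbers are invented stand-ins for per-day spend that would really come from the Cost Explorer API.

```python
from statistics import mean, stdev

def cost_outliers(daily_costs, threshold=2.0):
    """Return indices of days whose spend exceeds the average by more
    than `threshold` standard deviations. This is an illustrative
    alerting rule, not an AWS API call."""
    mu, sigma = mean(daily_costs), stdev(daily_costs)
    return [i for i, c in enumerate(daily_costs) if c > mu + threshold * sigma]

costs = [100, 102, 98, 101, 99, 250]  # day 5 is a usage spike
print(cost_outliers(costs))  # [5]
```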
Chennai
INR 10.0 - 20.0 Lacs P.A.
Hybrid
Full Time
Role: ML Engineer
Experience: 3 to 15 years
Location: Chennai (hybrid mode); candidates willing to relocate are also welcome. We have a good budget for this role.

Job Description:

Responsibilities:
- Strong understanding of ML algorithms, techniques, and best practices.
- Strong understanding of Databricks, Azure AI services, and other ML and cloud computing platforms (e.g., AWS, Azure, GCP) and frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Strong understanding of MLflow or Kubeflow.
- Strong Python programming skills and data analysis expertise.
- Experience building GenAI-based solutions, such as chatbots using RAG approaches.
- Expertise in at least one GenAI framework, such as LangChain/LangGraph, AutoGen, or CrewAI.

Requirements:
- Proven experience as a Machine Learning Engineer, Data Scientist, or similar role, with a focus on product matching, image matching, and LLMs.
- Solid understanding of machine learning algorithms and frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Hands-on experience with product matching algorithms and image recognition techniques.
- Experience with natural language processing and large language models (LLMs) such as GPT, BERT, or similar architectures.
- Ability to optimize and fine-tune models for performance and scalability.
- Ability to collaborate with cross-functional teams to integrate ML solutions into products.
- Commitment to staying current with the latest advancements in AI and machine learning.
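For readers unfamiliar with the RAG approaches mentioned above: retrieval-augmented generation first retrieves the documents most relevant to a query, then passes them to an LLM as context. Here is a toy sketch of the retrieval half, using word counts in place of learned embeddings; frameworks like LangChain wrap this same idea around real vector stores, and all documents below are invented.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query: the
    retrieval step of RAG, with word counts standing in for embeddings."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = ["snowflake stores data in micro partitions",
        "airflow schedules batch pipelines",
        "chatbots answer questions from retrieved context"]
print(retrieve("how do chatbots use retrieved context", docs))
```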
Chennai
INR 40.0 - 40.0 Lacs P.A.
Hybrid
Full Time
As a Data Architect/Engineer, design and implement data solutions across the retail industry (SCM, marketing, sales, and customer service) using technologies such as dbt, Snowflake, and Azure/AWS/GCP.
- Design and optimize data pipelines that integrate various data sources (1st party, 3rd party, operational) to support business intelligence and advanced analytics.
- Develop data models and data flows that enable personalized customer experiences and support omnichannel marketing and customer engagement.
- Lead efforts to ensure data governance, data quality, and data security, adhering to regulations such as GDPR and CCPA.
- Implement and maintain data warehousing solutions in Snowflake to handle large-scale data processing and analytics needs.
- Optimize workflows using dbt to streamline data transformation and modeling processes.
- Leverage Azure for cloud infrastructure, data storage, and real-time data analytics, while ensuring the architecture supports scalability and performance.
- Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to ensure data architectures meet business needs.
- Support both real-time and batch data integration, ensuring data is accessible for actionable insights and decision-making.
- Continuously assess and integrate new data technologies and methodologies to enhance the organization's data capabilities.

Qualifications:
- 6+ years of experience in Data Architecture or Data Engineering, with specific expertise in dbt, Snowflake, and Azure/AWS/GCP.
- Strong understanding of data modeling, ETL/ELT processes, and modern data architecture frameworks.
- Experience designing scalable data architectures for personalization and customer analytics across marketing, sales, and customer service domains.
- Expertise with cloud data platforms (Azure preferred) and Big Data technologies for large-scale data processing.
- Hands-on experience with Python for data engineering tasks and scripting.
- Proven track record of building and managing data pipelines and data warehousing solutions using Snowflake.
- Familiarity with Customer Data Platforms (CDP), Master Data Management (MDM), and Customer 360 architectures.
- Strong problem-solving skills and the ability to work with cross-functional teams to translate business requirements into scalable data solutions.
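The incremental transformation work described above (dbt models on Snowflake) frequently compiles down to a MERGE: update rows whose business key already exists, insert the rest. Below is a minimal pure-Python sketch of that upsert pattern; the table contents and key name are hypothetical.

```python
def merge_incremental(target, updates, key="id"):
    """Upsert `updates` into `target` by business key: the MERGE
    pattern that incremental warehouse models typically compile to.
    Pure-Python illustration only; key and rows are invented."""
    merged = {row[key]: row for row in target}
    for row in updates:
        merged[row[key]] = row  # existing keys update, new keys insert
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"id": 1, "status": "new"}, {"id": 2, "status": "open"}]
updates = [{"id": 2, "status": "closed"}, {"id": 3, "status": "new"}]
print(merge_incremental(target, updates))
```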
Chennai
INR 10.0 - 20.0 Lacs P.A.
Hybrid
Full Time
Requirement
Role: ML Engineer/Data Scientist
Experience: 3 to 15 years

Job Description:

Responsibilities:
- Strong understanding of ML algorithms, techniques, and best practices.
- Strong understanding of Databricks, Azure AI services, and other ML and cloud computing platforms (e.g., AWS, Azure, GCP) and frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Strong understanding of MLflow or Kubeflow.
- Strong Python programming skills and data analysis expertise.
- Experience building GenAI-based solutions, such as chatbots using RAG approaches.
- Expertise in at least one GenAI framework, such as LangChain/LangGraph, AutoGen, or CrewAI.

Requirements:
- Proven experience as a Machine Learning Engineer, Data Scientist, or similar role, with a focus on product matching, image matching, and LLMs.
- Solid understanding of machine learning algorithms and frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Hands-on experience with product matching algorithms and image recognition techniques.
- Experience with natural language processing and large language models (LLMs) such as GPT, BERT, or similar architectures.
- Ability to optimize and fine-tune models for performance and scalability.
- Ability to collaborate with cross-functional teams to integrate ML solutions into products.
- Commitment to staying current with the latest advancements in AI and machine learning.
Hyderabad
INR 7.0 - 12.0 Lacs P.A.
Remote
Full Time
- 5+ years of commercial analytics experience in the pharma/healthcare industry (must have).
- Excellent communication skills.
- Strong stakeholder and project management skills.
- Good proficiency in SQL (must have).
- Working knowledge of Snowflake (good to have).
- Knowledge of at least one BI tool; MicroStrategy preferred.
- Should have worked on commercial and call activity data.
- Exposure to pharma datasets from IMS, IQVIA, or similar vendors.
Hyderabad
INR 4.0 - 7.0 Lacs P.A.
Work from Office
Full Time
The Data Steward will play a critical role in ensuring data integrity, quality, and governance within SAP systems. Responsibilities include:

Data Governance:
- Define ownership and accountability for critical data assets to ensure they are effectively managed and maintain integrity throughout systems.
- Collaborate with business and IT teams to enforce data governance policies, ensuring alignment with enterprise data standards.

Data Quality Management:
- Promote data accuracy and adherence to defined data management and governance practices.
- Identify and resolve data discrepancies to enhance operational efficiency.

Data Integration and Maintenance:
- Manage and maintain master data quality for the Finance and Material domains within the SAP system.
- Support SAP data migrations, validations, and audits to ensure seamless data integration.

Compliance and Reporting:
- Ensure compliance with regulatory and company data standards.
- Develop and distribute recommendations and supporting documentation for new or proposed data standards, business rules, and policies.
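Data quality rules like those described above are often expressed as per-record validations. Here is a minimal illustrative sketch; the field names loosely mimic SAP material master fields but are invented for this example, as is the set of accepted units.

```python
def validate_material(record, required=("material_id", "uom", "plant")):
    """Return a list of rule violations for one master-data record.
    Field names and the allowed-units set are hypothetical examples."""
    errors = [f"missing {f}" for f in required if not record.get(f)]
    if record.get("uom") and record["uom"] not in {"EA", "KG", "L"}:
        errors.append("unknown unit of measure")
    return errors

print(validate_material({"material_id": "M-100", "uom": "KG", "plant": "HYD1"}))  # []
print(validate_material({"material_id": "M-101", "uom": "BOX"}))
```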
Hyderabad
INR 4.0 - 7.0 Lacs P.A.
Work from Office
Full Time
Required Skills & Qualifications:
- Proficient in React.js and its core principles.
- Strong understanding of JavaScript, HTML5, and CSS3.
- Experience with popular React workflows such as Redux, the Context API, or similar.
- Familiarity with RESTful APIs and modern authorization mechanisms (e.g., JWT).
- Understanding of responsive design and cross-browser compatibility.
- Experience with tools like Webpack, Babel, and NPM.
- Familiarity with Git version control.
- Strong problem-solving skills and attention to detail.

Good to Have:
- Experience with or exposure to Business Intelligence (BI) tools and dashboards (e.g., Power BI, Tableau, Looker).
- Understanding of data visualization best practices.
- Knowledge of data-driven application development.
Chennai
INR 7.0 - 17.0 Lacs P.A.
Hybrid
Full Time
We are seeking a highly skilled and motivated Azure Data Engineer to join our growing data team. In this role, you will be responsible for designing, developing, and maintaining scalable and robust data pipelines and data solutions within the Microsoft Azure ecosystem. You will work closely with data scientists, analysts, and business stakeholders to understand data requirements and translate them into effective data architectures. The ideal candidate will have a strong background in data warehousing, ETL/ELT processes, and a deep understanding of Azure data services.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines using Azure Data Factory, Azure Synapse Analytics, Azure Databricks, or other relevant Azure services.
- Develop and optimize data ingestion processes from various source systems (on-premises, cloud, third-party APIs) into Azure data platforms.
- Implement data warehousing solutions, including dimensional modeling and data lake strategies, using Azure Synapse Analytics, Azure Data Lake Storage Gen2, or Azure SQL Database.
- Write, optimize, and maintain complex SQL queries, stored procedures, and data transformation scripts.
- Develop and manage data quality checks, data validation processes, and data governance policies.
- Monitor and troubleshoot data pipeline issues, ensuring data accuracy and availability.
- Collaborate with data scientists and analysts to support their data needs for reporting, analytics, and machine learning initiatives.
- Implement security best practices for data storage and access within Azure.
- Participate in code reviews, contribute to architectural discussions, and promote best practices in data engineering.
- Stay up to date with the latest Azure data technologies and trends, proposing and implementing improvements where applicable.
- Document data flows, architectures, and operational procedures.

Qualifications (Required):
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
- 3 to 5 years of professional experience as a Data Engineer, with a strong focus on Microsoft Azure data platforms.
- Proven experience with Azure Data Factory for orchestration and ETL/ELT.
- Solid understanding of and hands-on experience with Azure Synapse Analytics (SQL pool, Spark pool) or Azure SQL Data Warehouse.
- Proficiency in SQL and experience with relational databases.
- Experience with Azure Data Lake Storage Gen2.
- Familiarity with data modeling, data warehousing concepts (e.g., Kimball methodology), and ETL/ELT processes.
- Strong programming skills in Python or Spark (PySpark).
- Experience with version control systems (e.g., Git).
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Chennai
INR 5.0 - 15.0 Lacs P.A.
Hybrid
Full Time
Job Summary: We are seeking a highly skilled and passionate Data Scientist with 3-6 years of experience to join our dynamic team in Chennai. The ideal candidate will possess a strong background in machine learning and deep learning methodologies, coupled with expert-level proficiency in PySpark and SQL for large-scale data manipulation and analysis. You will be instrumental in transforming complex data into actionable insights, building predictive models, and deploying robust data-driven solutions that directly impact our business objectives.

Key Responsibilities:

Data Analysis & Feature Engineering:
- Perform extensive exploratory data analysis (EDA) to identify trends, patterns, and anomalies in large, complex datasets.
- Develop and implement robust data preprocessing, cleaning, and feature engineering pipelines using PySpark and SQL to prepare data for model training.
- Work with structured and unstructured data, ensuring data quality and integrity.

Model Development & Implementation (Machine Learning & Deep Learning):
- Design, develop, and implement advanced machine learning (ML) and deep learning (DL) models to solve complex business problems such as prediction, classification, recommendation, and anomaly detection.
- Apply a wide range of ML algorithms (e.g., regression, classification, clustering, ensemble methods) and DL architectures (e.g., CNNs, RNNs, Transformers) as appropriate for the problem at hand.
- Optimize and fine-tune models for performance, accuracy, and scalability, using ML/DL frameworks such as TensorFlow, PyTorch, and scikit-learn.

Big Data Processing:
- Leverage PySpark extensively for distributed data processing, ETL operations, and running machine learning algorithms on big data platforms (e.g., Hadoop, Databricks, Spark clusters).
- Write efficient, optimized SQL queries for data extraction, transformation, and loading from relational and non-relational databases.

Deployment & MLOps:
- Collaborate with MLOps engineers and software development teams to integrate and deploy machine learning and deep learning models into production environments.
- Monitor model performance, identify degradation, and implement retraining strategies to ensure sustained accuracy and relevance.
- Contribute to building CI/CD pipelines for ML model deployment.

Insights & Communication:
- Translate complex analytical findings and model results into clear, concise, and actionable insights for both technical and non-technical stakeholders.
- Create compelling data visualizations and reports to effectively communicate findings and recommendations.
- Act as a subject matter expert, guiding business teams on data-driven decision-making.

Research & Innovation:
- Stay abreast of the latest advancements in data science, machine learning, deep learning, and big data technologies.
- Proactively identify opportunities to apply new techniques and tools to enhance existing solutions or develop new capabilities.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Engineering, or a related quantitative field.
- 3-6 years of hands-on experience as a Data Scientist or in a similar role.
- Expert proficiency in Python for data science, including libraries such as Pandas, NumPy, and scikit-learn.
- Strong expertise in PySpark for large-scale data processing and machine learning.
- Advanced SQL skills, with the ability to write complex, optimized queries for data extraction and manipulation.
- Proven experience applying machine learning algorithms to real-world problems.
- Solid understanding of and hands-on experience with deep learning frameworks (e.g., TensorFlow, PyTorch) and architectures.
- Experience with big data technologies such as the Hadoop or Spark ecosystem.
- Strong understanding of statistical concepts, hypothesis testing, and experimental design.
- Excellent problem-solving, analytical, and critical thinking skills.
- Ability to work independently and collaboratively in a fast-paced environment.
- Strong communication and presentation skills, with the ability to explain complex technical concepts to diverse audiences.
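As one small concrete instance of the feature engineering work described above, here is z-score standardization in plain Python. At cluster scale the same computation would typically run through PySpark (for example a StandardScaler stage) rather than a list comprehension; the input values are invented.

```python
from statistics import mean, pstdev

def standardize(values):
    """Z-score a numeric feature: subtract the mean, divide by the
    population standard deviation. A typical preprocessing step
    before model training."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

print(standardize([2.0, 4.0, 6.0]))
```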
Chennai
INR 10.0 - 20.0 Lacs P.A.
Work from Office
Full Time
Job Summary: We are seeking a talented and driven Machine Learning Engineer with 2-5 years of experience to join our dynamic team in Chennai. The ideal candidate will have a strong foundation in machine learning principles and extensive hands-on experience in building, deploying, and managing ML models in production environments. A key focus of this role will be on MLOps practices and orchestration, ensuring our ML pipelines are robust, scalable, and automated.

Key Responsibilities:
- ML model deployment & management: design, develop, and implement end-to-end MLOps pipelines for deploying, monitoring, and managing machine learning models in production.
- Orchestration: utilize orchestration tools (e.g., Apache Airflow, Kubeflow, AWS Step Functions, Azure Data Factory) to automate ML workflows, including data ingestion, feature engineering, model training, validation, and deployment.
- CI/CD for ML: implement Continuous Integration/Continuous Deployment (CI/CD) practices for ML code, models, and infrastructure, ensuring rapid and reliable releases.
- Monitoring & alerting: establish comprehensive monitoring and alerting systems for deployed ML models to track performance, detect data and model drift, and ensure operational health.
- Infrastructure as Code (IaC): work with IaC tools (e.g., Terraform, CloudFormation) to manage and provision cloud resources required for ML workflows.
- Containerization: leverage containerization technologies (Docker, Kubernetes) for packaging and deploying ML models and their dependencies.
- Collaboration: work closely with data scientists, data engineers, and software developers to translate research prototypes into production-ready ML solutions.
- Performance optimization: optimize ML model inference and training performance, focusing on efficiency, scalability, and cost-effectiveness.
- Troubleshooting & debugging: troubleshoot and debug issues across the entire ML lifecycle, from data pipelines to model serving.
- Documentation: create and maintain clear technical documentation for MLOps processes, pipelines, and infrastructure.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.
- 2-5 years of professional experience as a Machine Learning Engineer, MLOps Engineer, or in a similar role.
- Strong proficiency in Python and its ML ecosystem (e.g., scikit-learn, TensorFlow, PyTorch, Pandas, NumPy).
- Hands-on experience with at least one major cloud platform (AWS, Azure, GCP) and its relevant ML/MLOps services (e.g., AWS SageMaker, Azure ML, GCP Vertex AI).
- Proven experience with orchestration tools like Apache Airflow, Kubeflow, or similar.
- Solid understanding of and practical experience with MLOps principles and best practices.
- Experience with containerization technologies (Docker, Kubernetes).
- Familiarity with CI/CD pipelines and tools (e.g., GitLab CI/CD, Jenkins, Azure DevOps, AWS CodePipeline).
- Knowledge of database systems (SQL and NoSQL).
- Excellent problem-solving, analytical, and debugging skills.
- Strong communication and collaboration abilities, with a capacity to work effectively in an Agile environment.
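The monitoring duty above (detecting data drift on deployed models) can be illustrated with a deliberately simple rule: alert when the live feature mean moves too many standard errors away from the training-time mean. Production monitors use richer statistics (PSI, KS tests), but the shape is the same; all numbers below are invented.

```python
from statistics import mean, stdev

def drift_alert(reference, live, z_threshold=3.0):
    """Flag drift when the live mean is more than z_threshold standard
    errors from the reference (training-time) mean. A toy stand-in for
    real drift monitors; thresholds and data are illustrative."""
    se = stdev(reference) / len(reference) ** 0.5
    z = abs(mean(live) - mean(reference)) / se
    return z > z_threshold

ref = [10.0, 11.0, 9.0, 10.5, 9.5] * 20       # training distribution
print(drift_alert(ref, [10.1, 9.9, 10.0]))    # stable window
print(drift_alert(ref, [14.0, 15.0, 14.5]))   # shifted window
```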
Chennai
INR 7.0 - 17.0 Lacs P.A.
Work from Office
Full Time
About the Role: We are looking for a highly skilled and passionate Data Engineer to join our dynamic data team. In this role, you will be instrumental in designing, building, and optimizing our data infrastructure, with a strong emphasis on leveraging Snowflake for data warehousing and dbt (data build tool) for data transformation. You will work across various cloud environments (AWS, Azure, GCP), ensuring our data solutions are scalable, reliable, and efficient. This position requires a deep understanding of data warehousing principles, ETL/ELT methodologies, and a commitment to data quality and governance.

Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines using various data integration tools and techniques within a cloud environment.
- Build and optimize data models and transformations in Snowflake using dbt, ensuring data accuracy, consistency, and performance.
- Manage and administer Snowflake environments, including performance tuning, cost optimization, and security configurations.
- Develop and implement data ingestion strategies from diverse source systems (APIs, databases, files, streaming data) into Snowflake.
- Write, optimize, and maintain complex SQL queries for data extraction, transformation, and loading (ETL/ELT) processes.
- Implement data quality checks, validation rules, and monitoring solutions within dbt and Snowflake.
- Collaborate closely with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into efficient data solutions.
- Promote and enforce data governance best practices, including metadata management, data lineage, and documentation.
- Participate in code reviews, contribute to architectural discussions, and champion best practices in data engineering and dbt development.
- Troubleshoot and resolve data-related issues, ensuring data availability and reliability.
- Stay current with industry trends and new technologies in the data engineering space, particularly around Snowflake, dbt, and cloud platforms.

Qualifications (Required):
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related quantitative field.
- 3 to 5 years of professional experience as a Data Engineer.
- Expert-level proficiency with Snowflake for data warehousing, including performance optimization and resource management.
- Extensive hands-on experience with dbt (data build tool) for data modeling, testing, and documentation.
- Strong proficiency in SQL, with the ability to write complex, optimized queries.
- Solid programming skills in Python for data manipulation, scripting, and automation.
- Experience with at least one major cloud platform (AWS, Azure, or GCP) and its core data services.
- Proven understanding of data warehousing concepts, dimensional modeling, and ETL/ELT principles.
- Experience with version control systems (e.g., Git).
- Excellent analytical, problem-solving, and debugging skills.
- Strong communication and collaboration abilities, with a capacity to work effectively with cross-functional teams.
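The dbt testing mentioned above centers on declarative column checks such as `unique` and `not_null`. Here is a pure-Python sketch of what those two tests verify, run against a hypothetical orders table; in dbt itself these are one-line YAML entries compiled to SQL.

```python
def failed_unique(rows, column):
    """Rows whose value in `column` repeats (analogue of dbt's `unique` test)."""
    seen, failures = set(), []
    for row in rows:
        if row[column] in seen:
            failures.append(row)
        seen.add(row[column])
    return failures

def failed_not_null(rows, column):
    """Rows where `column` is null (analogue of dbt's `not_null` test)."""
    return [row for row in rows if row.get(column) is None]

orders = [
    {"order_id": 1, "customer_id": "C1"},
    {"order_id": 1, "customer_id": None},   # duplicate key, null customer
    {"order_id": 2, "customer_id": "C2"},
]
print(failed_unique(orders, "order_id"))
print(failed_not_null(orders, "customer_id"))
```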
Hyderabad
INR 10.0 - 20.0 Lacs P.A.
Hybrid
Full Time
RandomTrees is a leading Data & AI company offering a diverse range of products and services within the data and AI space. As a strategic partner of IBM, we support multiple industries, including Pharma, Banking, Semiconductor, Oil & Gas, and more. We are also actively engaged in research and innovation in Generative AI (GenAI) and Conversational AI. Headquartered in the United States, we also have offices in Hyderabad and Chennai, India.

Job Title: Big Data Engineer
Experience: 5-9 years
Location: Hyderabad (hybrid)
Employment Type: Full-Time

Job Summary: We are seeking a skilled Big Data Engineer with 5-9 years of experience building and managing scalable data pipelines and analytics solutions. The ideal candidate will have strong expertise in Hadoop, Apache Spark, SQL, and Data Lake/Data Warehouse architectures. Experience with any cloud platform (AWS, Azure, or GCP) is preferred.

Required Skills:
- 5-9 years of hands-on experience as a Big Data Engineer.
- Strong proficiency in Apache Spark (PySpark or Scala).
- Solid understanding of and experience with SQL and database optimization.
- Experience with data lake or data warehouse environments and architecture patterns.
- Good understanding of data modeling, performance tuning, and partitioning strategies.
- Experience working with large-scale distributed systems and batch/stream data processing.

Preferred Qualifications:
- Experience with cloud platforms, preferably GCP, AWS, or Azure.

Education: Bachelor's degree in Computer Science, Engineering, or a related field.
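One of the partitioning strategies this posting alludes to is hash partitioning: a stable hash of a record key decides which bucket (a Spark partition, a warehouse file group, and so on) the record lands in. A minimal sketch follows; MD5 is used here only because it is stable across processes, unlike Python's salted built-in `hash()`, and the keys are invented.

```python
import hashlib

def partition(key, num_partitions=4):
    """Assign a record key to one of num_partitions buckets using a
    stable hash, the bucketing idea behind shuffle partitioning."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % num_partitions

# Distribute some hypothetical user ids across buckets.
buckets = {}
for user_id in range(10):
    buckets.setdefault(partition(user_id), []).append(user_id)
print(buckets)
```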
Hyderabad
INR 3.0 - 6.0 Lacs P.A.
Work from Office
Full Time
Mid-level Snowflake engineers with 4-5 years of experience in data warehousing, including 1-2 years with Snowflake. Good academic background, good communication skills, strong SQL and Python, and solid data warehousing and data modeling experience.
Pune, Chennai, Bengaluru
INR 10.0 - 14.0 Lacs P.A.
Work from Office
Full Time
- Minimum 3-8 years of industry experience (with a Bachelor's degree) in embedded system design, preferably in the automotive industry.
- Broad knowledge of embedded hardware and software.
- 3+ years of experience with system-level cybersecurity concepts and architecture.
- Working knowledge of embedded operating systems (QNX, FreeRTOS, AUTOSAR, etc.).
- Working experience with hardware support for cybersecurity (e.g., HSM/SHE).
- Working experience with Secure Boot, Secure JTAG, secure HW/SW, Secure Storage, etc. for microprocessors.
- Sound knowledge of symmetric/asymmetric encryption/decryption and signature handling.
- Good knowledge of the implications of cybersecurity for production (secure flashing, required tooling, etc.).
- Knowledge of AUTOSAR support for cybersecurity is desirable.
- Working knowledge of development processes and process models such as CMMI or ASPICE.
- Hands-on experience with root-cause-analysis processes and strong attention to detail.
- Exemplary verbal and written communication skills.
- Creative problem-solver capable of creating and reproducing complex system/functional defects.
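The symmetric signature handling mentioned above can be sketched with an HMAC: the flashing tool and the ECU share a secret key, and a firmware image is accepted only if its tag verifies. This is a Python stdlib illustration of the idea, not automotive HSM code; the key and image bytes are invented.

```python
import hmac
import hashlib

def sign(key: bytes, firmware: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over a firmware image."""
    return hmac.new(key, firmware, hashlib.sha256).digest()

def verify(key: bytes, firmware: bytes, tag: bytes) -> bool:
    # compare_digest avoids timing side channels when checking the tag
    return hmac.compare_digest(sign(key, firmware), tag)

key = b"shared-secret-provisioned-into-hsm"   # hypothetical key material
image = b"\x7fELF...firmware bytes..."        # hypothetical image
tag = sign(key, image)
print(verify(key, image, tag))                # True
print(verify(key, image + b"tamper", tag))    # False
```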
Hyderabad
INR 3.0 - 7.0 Lacs P.A.
Work from Office
Full Time
1. Role: ML Engineer

Must have:
- 4+ years of experience.
- Strong understanding of statistics.
- Knowledge of and experience with statistical and data mining techniques: GLM/regression, random forest, boosting, trees, text mining, ensemble techniques, etc.
- Strong software prototyping and engineering skills, with expertise in Python required.
- Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
- Understanding of time series, data patterns, and product data from the manufacturing process.

Qualifications & Skills:
- Preferred: degree in Mechanical Engineering, Computer Science, ECE, Statistics, Applied Math, or a related field.
- 4+ years of practical experience with ML projects, data processing, database programming, and data analytics.
- Extensive background in data mining and statistical analysis.
- Able to understand various data structures and common data transformation methods.
- Excellent pattern recognition and predictive modeling skills.
- Experience with Business Intelligence tools like Power BI is an asset.
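The GLM/regression techniques listed above start from ordinary least squares, which for a single feature has a closed form. A minimal sketch with invented data follows; real work would use a library such as scikit-learn or statsmodels.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x via the closed form:
    b = cov(x, y) / var(x), a = mean(y) - b * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # (intercept, slope)

a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)  # data lies exactly on y = 1 + 2x
```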
Hyderabad
INR 7.0 - 10.0 Lacs P.A.
Work from Office
Full Time
Node.js developer with .NET Core knowledge.
- Experience building strong web applications (REST APIs) using a Node.js backend (Express framework).
- Experience with a NoSQL database (e.g., MongoDB).
- Knowledge of TypeScript is a plus.
- Knowledge of debugging, monitoring, and maintenance of services.
- Experience with version control systems (Git/GitHub/GitLab).
- Knowledge of or experience with Azure or Azure DevOps is good to have.
Hyderabad
INR 7.0 - 10.0 Lacs P.A.
Work from Office
Full Time
Location: Bangalore & Mysore
Experience: 5-8 years

AUTOSAR BSW and RTOS:
- Hands-on integration experience with the AUTOSAR CAN Com, Diag, and BSW stack.
- Able to understand reported defects and debug independently.
- Configure BSW (Basic Software) according to OEM requirements.
- Integrate third-party AUTOSAR or legacy modules with EB tresos AutoCore.
- Good embedded C programming and debugging skills.
- Good knowledge of 32-bit microcontrollers (e.g., NXP i.MX8, Infineon Traveo II, etc.).
- Good knowledge of embedded software development environments and tools, including IDEs, editors, compilers, linkers, emulators, debuggers, and analysis and monitoring tools.
- Ability to work independently and in small teams.
Hyderabad, Bengaluru
INR 5.0 - 8.0 Lacs P.A.
Work from Office
Full Time
Location: Bangalore
Experience: 6-10 years

Job Description:
- Good experience with infotainment features such as Bluetooth, FM, USB, and AUX.
- Good knowledge of tools such as CAN, CANoe, and CANalyzer.
- Good experience with cluster and radio; hands-on knowledge of phone projection testing with infotainment systems such as Android Auto and CarPlay.
- Should know how to create test cases and test specs.
- Good knowledge of various testing methods.
- Good experience with Python programming concepts.
- Good knowledge of verification and validation for on-bench and in-vehicle testing.

If this sounds interesting, please share your resume ASAP with the details below:
Experience:
Current CTC:
Expected CTC:
Notice period:
Mysuru, Bengaluru
INR 4.0 - 8.0 Lacs P.A.
Work from Office
Full Time
- Hands-on integration experience with the AUTOSAR CAN Com, Diag, and BSW stack.
- Able to understand reported defects and debug independently.
- Configure BSW (Basic Software) according to OEM requirements.
- Integrate third-party AUTOSAR or legacy modules with EB tresos AutoCore.
- Good embedded C programming and debugging skills.
- Good knowledge of 32-bit microcontrollers (e.g., NXP i.MX8, Infineon Traveo II, etc.).
- Good knowledge of embedded software development environments and tools, including IDEs, editors, compilers, linkers, emulators, debuggers, and analysis and monitoring tools.
- Ability to work independently and in small teams.