Bengaluru, Karnataka, India
Not disclosed
On-site
Full Time
Data Scientist @ DevOn

The ideal candidate's favorite words are learning, data, scale, and agility. You will leverage your strong collaboration skills and ability to extract valuable insights from highly complex data sets to ask the right questions and find the right answers. You have at least 3 years of experience as a Data Scientist.

🚀 We are hiring – Data Scientists (immediate to 15-day joiners). Are you passionate about building models that make a real-world impact?

Responsibilities
At DevOn, you'll:
🔹 Work on real-world classification and regression problems
🔹 Build models using XGBoost, LightGBM, and Random Forest
🔹 Perform feature engineering and scaling, and handle missing data and outliers
🔹 Apply model evaluation techniques – PR curve, F1, cross-validation, R² (a brief sketch follows this posting)
🔹 Tune hyperparameters and deploy using CI/CD, Docker, and AWS
🔹 Collaborate with engineers and analysts to take models to production
🔹 Document experiments, build reproducible pipelines, and present insights

Your profile:
✔ Strong hands-on experience in end-to-end ML model development
✔ Solid understanding of model performance metrics and tuning
✔ Expertise in Python, SQL, GitHub, Jupyter, and TensorFlow/PyTorch
✔ Knowledge of the bias-variance tradeoff, bagging vs. boosting, IQR, etc.
✔ Experience with cloud platforms (AWS preferred)
✔ Bonus: exposure to A/B testing or causal inference

📩 Send your resume to: TA-IN-Consulting@devon.nl
📅 Looking for immediate joiners or a maximum notice period of 15 days

Qualifications
- Bachelor's degree or equivalent experience in a quantitative field (Statistics, Mathematics, Computer Science, Engineering, etc.)
- At least 3 years of experience in quantitative analytics or data modeling
- Deep understanding of predictive modeling, machine learning, clustering and classification techniques, and algorithms
- Fluency in a programming language (Python and SQL)
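As context for the evaluation techniques named in this posting (cross-validation, F1, PR curve), here is a minimal, illustrative Python sketch using XGBoost and scikit-learn. The synthetic dataset, hyperparameters, and 0.5 decision threshold are assumptions for demonstration only; this is not DevOn's actual workflow.

```python
# Illustrative sketch only: synthetic data and arbitrary hyperparameters,
# not an actual DevOn pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.metrics import f1_score, precision_recall_curve, auc
from xgboost import XGBClassifier

# Synthetic, imbalanced binary-classification data standing in for a real dataset.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=42
)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1, eval_metric="logloss")

# Cross-validated F1 on the training split.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
cv_f1 = cross_val_score(model, X_train, y_train, cv=cv, scoring="f1")
print(f"CV F1: {cv_f1.mean():.3f} ± {cv_f1.std():.3f}")

# Fit on the full training split, then inspect PR-AUC and F1 on the holdout set.
model.fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]
precision, recall, _ = precision_recall_curve(y_test, proba)
print(f"Holdout PR-AUC: {auc(recall, precision):.3f}")
print(f"Holdout F1 @ 0.5 threshold: {f1_score(y_test, proba >= 0.5):.3f}")
```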
Bengaluru, Karnataka, India
Not disclosed
Hybrid
Full Time
Company Description
DevOn is your digital transformation partner, specializing in DevOps, continuous delivery, test automation, cloud, agile, security, and AI solutions. We help enterprises achieve faster time-to-market, deliver high-quality software, reduce costs, build high-performance teams, and leverage AI for smarter software delivery.

Role Description
This is a full-time hybrid role for a Power BI Developer at DevOn. The Power BI Developer will be responsible for designing, developing, and maintaining business intelligence solutions using Power BI. The role is based in Bengaluru, with the flexibility to work from home as needed.

Qualifications
- Power BI experience with visuals and multi-source data handling
- Power BI query optimisation, access management, and drill-down functionality
- Understanding of analytics and user experience in reporting
- ServiceNow experience with project management and incident management using APIs (see the sketch below)
- Good to have: experience with Azure DevOps APIs, OData queries, and Analytics views
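For the ServiceNow incident-management integration mentioned above, a minimal Python sketch against the standard ServiceNow Table API might look like the following. The instance URL, credentials, query string, and field selection are placeholders; this is an illustration, not DevOn's actual integration.

```python
# Illustrative sketch of pulling incidents from the ServiceNow Table API.
# The instance name, credentials, and query below are placeholders.
import requests

INSTANCE = "https://example-instance.service-now.com"  # hypothetical instance
AUTH = ("api_user", "api_password")                     # placeholder credentials


def fetch_open_incidents(limit: int = 50) -> list[dict]:
    """Return open incidents as a list of dicts (selected fields only)."""
    resp = requests.get(
        f"{INSTANCE}/api/now/table/incident",
        auth=AUTH,
        headers={"Accept": "application/json"},
        params={
            "sysparm_query": "active=true^state!=6",  # assumed filter for open incidents
            "sysparm_fields": "number,short_description,priority,assigned_to",
            "sysparm_limit": limit,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]


if __name__ == "__main__":
    for incident in fetch_open_incidents(limit=10):
        print(incident["number"], "-", incident["short_description"])
```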
Karnataka
Not disclosed
On-site
Full Time
DevOn is a leading provider of innovative technology solutions focused on data-driven decision-making, cloud computing, and advanced analytics. Our dynamic team is dedicated to solving complex business problems through technology, and we are currently seeking a skilled and motivated Data Engineer Lead to join us.

As a Data Engineer Lead, your primary responsibility will be to lead the design, development, and maintenance of data pipelines and ETL workflows using modern cloud technologies. You will collaborate closely with cross-functional teams to ensure data availability, reliability, and scalability, facilitating data-driven decision-making throughout the organization. This role requires a deep understanding of Python, PySpark, AWS Glue, Redshift, SQL, Jenkins, Bitbucket, EKS, and Airflow.

Key Responsibilities:
- Lead the design and implementation of scalable data pipelines and ETL workflows in a cloud environment, primarily AWS.
- Develop and manage data ingestion, transformation, and storage frameworks using AWS Glue, PySpark, and Redshift (see the PySpark sketch after this posting).
- Architect and optimize complex SQL queries for large datasets to maintain data integrity across systems.
- Work with data scientists, analysts, and business stakeholders to understand data requirements and provide high-quality data solutions.
- Automate the end-to-end data pipeline process using Jenkins and Bitbucket, ensuring efficient CI/CD practices.
- Optimize and oversee data orchestration using Apache Airflow.
- Offer technical leadership and mentorship to junior team members, ensuring adherence to data engineering best practices.
- Use AWS services such as Redshift, S3, Lambda, and EKS for the deployment and management of data solutions.
- Troubleshoot and resolve complex data pipeline issues, minimizing downtime and ensuring high availability.
- Participate in architecture and design reviews, contributing insights on technical solutions and enhancements.
- Continuously assess new tools and technologies to improve the efficiency and scalability of our data infrastructure.

Required Skills and Qualifications:
- 5+ years of professional experience in data engineering, with a track record of building scalable data pipelines and ETL workflows.
- Proficiency in Python for data processing and scripting.
- Hands-on experience with PySpark for large-scale data processing.
- Comprehensive knowledge of AWS Glue, Redshift, S3, and other AWS services.
- Advanced SQL skills for data manipulation and optimization.
- Experience with Jenkins and Bitbucket for CI/CD automation.
- Familiarity with EKS (Elastic Kubernetes Service) for containerized deployment of data applications.
- Proficiency in Apache Airflow for data orchestration and workflow automation.
- Strong problem-solving abilities and the capability to debug complex issues in data workflows.
- Excellent communication skills for collaborating with cross-functional teams and explaining technical concepts clearly.
- Ability to work in an Agile development environment, managing multiple priorities and meeting tight deadlines.

Preferred Qualifications:
- Experience with additional AWS services such as Lambda, Redshift Spectrum, and Athena.
- Familiarity with Docker and container orchestration technologies such as Kubernetes.
- Knowledge of data modeling and data warehousing concepts.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
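To illustrate the PySpark-based ingestion and transformation work described in this posting, here is a minimal sketch of a batch job that reads raw CSV data from S3, cleans it, and writes partitioned Parquet ready for loading into Redshift. The bucket paths, column names, and schema are assumptions for illustration; a production AWS Glue job would typically wrap this logic in a GlueContext and a Glue job definition rather than a bare SparkSession.

```python
# Minimal PySpark batch-ETL sketch. Bucket paths and column names are
# hypothetical; this is an illustration, not a production Glue job.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Ingest: raw CSV files landed in S3 (assumed layout).
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-raw-bucket/orders/")
)

# Transform: deduplicate, normalize types, and drop obviously bad rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull() & (F.col("amount") >= 0))
)

# Store: partitioned Parquet in a curated zone, suitable for a Redshift COPY
# or a Redshift Spectrum external table.
(
    clean.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)

spark.stop()
```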
Karnataka
Not disclosed
On-site
Full Time
As a .NET Engineer with cloud experience, you will be responsible for designing, modifying, writing, and implementing software programming components and applications. You will also install or support the software components and applications, maintain process flow and documentation, and work from pre-written specifications and guidelines.

Responsibilities and duties:
- Good experience with .NET Core, Azure, SQL Server, writing unit tests, and EF Core.
- Analytical and able to understand complex systems, gather requirements, propose solutions, and discuss them with stakeholders.
- React experience is a plus.

Minimum qualifications:
- Any graduate or postgraduate degree (BE / B.Tech / ME / M.Tech / MCA / MS) with at least 6 years of experience.
- Specific skills: ASP.NET Core, Azure, SQL Server, writing unit tests, and EF Core.

This position is based in Bangalore, India. As a candidate, you should be passionate about adopting new technologies, a strong team player with good communication skills, and capable of working on challenging tasks to solve complex issues in the project.

Perks and benefits include healthcare benefits, exceptional care, savings for tomorrow, flexi-working benefits, professional development opportunities, wellness benefits, family benefits, financial benefits, and F&B benefits. For more details, please refer to the Benefits PDF provided for India, the Netherlands, and Poland.