
4 Warehousing Concepts Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

In this role, you will be responsible for:

- Building and maintaining ETL/ELT pipelines to ensure data accuracy and quality
- Working with large datasets to collect, clean, and prepare data for analysis
- Designing and implementing predictive models and collaborating with Data Science teams
- Creating dashboards and reports using tools like Tableau, Power BI, matplotlib, and Seaborn
- Working with cloud platforms such as AWS, Azure, and GCP, and databases like SQL, MongoDB, PostgreSQL, and MySQL
- Collaborating with cross-functional teams to solve business problems using data

Preferred Skills:

- Proficiency in Big Data tools like Spark and Hadoop
- Experience with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn
- Strong programming skills in Python/R, particularly with Pandas and NumPy
- Knowledge of data engineering and warehousing concepts

The company also emphasizes employee well-being and growth opportunities, including:

- Health insurance
- Occasional WFH options as per policy
- Paid time off for personal needs
- Exposure to client interactions and real-world business challenges
- Opportunity to work on cutting-edge cross-functional projects

Please note that this is a full-time position with an in-person work location.
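As a rough illustration of the ETL work this listing describes, here is a minimal pandas sketch of an extract-clean-load step. The file name, table name, and column names (order_id, amount) are hypothetical placeholders, not anything specified by the employer:

```python
# Minimal ETL sketch: extract a CSV, clean it, and load it into SQLite.
# File, table, and column names are hypothetical placeholders.
import sqlite3

import pandas as pd


def run_pipeline(csv_path: str, db_path: str) -> int:
    # Extract: read the raw export.
    df = pd.read_csv(csv_path)

    # Transform: normalize column names, drop duplicates, fill gaps.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.drop_duplicates()
    df = df.dropna(subset=["order_id"])      # hypothetical key column
    df["amount"] = df["amount"].fillna(0.0)  # hypothetical numeric column

    # Load: write the cleaned frame to a reporting table.
    with sqlite3.connect(db_path) as conn:
        df.to_sql("orders_clean", conn, if_exists="replace", index=False)
    return len(df)


if __name__ == "__main__":
    print(run_pipeline("orders.csv", "warehouse.db"))
```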

Posted 14 hours ago


5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

This is a full-time Data Engineer position with D Square Consulting Services Pvt Ltd, based Pan-India with a hybrid work model. You should have at least 5 years of experience and be able to join immediately.

As a Data Engineer, you will design, build, and scale data pipelines and backend services supporting analytics and business intelligence platforms. A strong technical foundation, Python expertise, API development experience, and familiarity with containerized, CI/CD-driven workflows are essential for this role.

Key responsibilities:

- Designing, implementing, and optimizing data pipelines and ETL workflows using Python tools
- Building RESTful and/or GraphQL APIs
- Collaborating with cross-functional teams
- Containerizing data services with Docker and managing deployments with Kubernetes
- Developing CI/CD pipelines using GitHub Actions
- Ensuring code quality and optimizing data access and transformation

Required skills and qualifications:

- Bachelor's or Master's degree in Computer Science or a related field
- 5+ years of hands-on experience in data engineering or backend development
- Expert-level Python skills
- Experience building APIs with frameworks like FastAPI, Graphene, or Strawberry
- Proficiency in Docker, Kubernetes, SQL, and data modeling
- Good communication skills
- Familiarity with data orchestration tools
- Experience with streaming data platforms like Kafka or Spark
- Knowledge of data governance, security, and observability best practices
- Exposure to cloud platforms like AWS, GCP, or Azure

If you are proactive, self-driven, and possess the required technical skills, this Data Engineer position is an exciting opportunity to contribute to cutting-edge data solutions at D Square Consulting Services Pvt Ltd.
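To make the API-building requirement concrete, here is a minimal FastAPI sketch of a read endpoint over pipeline metadata. The route, model fields, and in-memory store are hypothetical stand-ins for a real metadata database, not this employer's actual service:

```python
# Minimal FastAPI sketch of a data-serving endpoint.
# Route, model fields, and the in-memory store are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class PipelineRun(BaseModel):
    run_id: str
    status: str
    rows_processed: int


# Stand-in for a real metadata store (e.g. a SQL table).
RUNS: dict[str, PipelineRun] = {
    "run-001": PipelineRun(run_id="run-001", status="succeeded", rows_processed=10_000),
}


@app.get("/runs/{run_id}", response_model=PipelineRun)
def get_run(run_id: str) -> PipelineRun:
    # Look up a pipeline run; return 404 if the id is unknown.
    run = RUNS.get(run_id)
    if run is None:
        raise HTTPException(status_code=404, detail="run not found")
    return run
```

Run with `uvicorn app:app` and query `/runs/run-001` to see the shape of the response.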

Posted 1 month ago


6.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

CES has over 26 years of experience delivering Software Product Development, Quality Engineering, and Digital Transformation Consulting Services to global SMEs and large enterprises, and has established long-term relationships with leading Fortune 500 companies across industries such as Automotive, AgTech, Bio Science, EdTech, FinTech, Manufacturing, Online Retailers, and Investment Banks. These relationships, spanning over a decade, are built on a commitment to timely delivery of quality services, investment in technology innovation, and a true partnership mindset with customers. In its current phase of exponential growth, CES maintains a consistent focus on continuous improvement and a process-oriented culture, and is seeking qualified and committed individuals to join the team. Learn more at: http://www.cesltd.com/

Requirements:

- Hands-on experience designing, developing, and deploying solutions with Azure Synapse Analytics, including a good understanding of its components such as SQL pools, Spark pools, and Integration Runtimes
- Proficiency in Azure Data Lake Storage, with a deep understanding of its architecture, features, and best practices for managing a large-scale Data Lake or Lakehouse in an Azure environment
- Experience with AI tools and LLMs (e.g. GitHub Copilot, Copilot, ChatGPT) for automating responsibilities related to the role
- Knowledge of Avro and Parquet file formats, including data serialization, compression techniques, and schema evolution in a big data environment
- Prior experience working with data in a healthcare or clinical laboratory setting is highly desirable, along with a strong understanding of PHI, GDPR, HIPAA, and HITRUST regulations
- Relevant certifications such as Azure Data Engineer Associate or Azure Synapse Analytics Developer Associate are highly desirable

Essential functions:

- Designing, developing, and maintaining data pipelines for ingestion, transformation, and loading of data into Azure Synapse Analytics
- Working on data models, SQL queries, stored procedures, and other artifacts necessary for data processing and analysis

Successful candidates should have proficiency in relational databases such as Oracle, Microsoft SQL Server, PostgreSQL, and MySQL/MariaDB, strong SQL skills, experience building ELT pipelines and data integration solutions, familiarity with data modeling and warehousing concepts, and excellent analytical and problem-solving abilities. Effective communication and collaboration skills are also crucial for working with cross-functional teams.

If you are a dedicated professional with the required expertise and skills, we invite you to join our team and contribute to our continued success in delivering exceptional services to our clients.
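As a sketch of the Spark/Parquet side of this role (not CES's actual pipeline), here is a minimal PySpark job that reads raw CSV from a lake path and writes partitioned Parquet. The paths and column names are hypothetical; on Azure Synapse the paths would typically be abfss:// URIs and the job would run on a Spark pool:

```python
# Minimal PySpark sketch: read raw CSV from a lake path and write
# partitioned Parquet. Paths and column names are hypothetical; on
# Azure Synapse these would typically be abfss:// URIs on a Spark pool.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-ingest").getOrCreate()

raw = spark.read.option("header", True).csv("/data/raw/lab_results/")

cleaned = (
    raw.dropDuplicates(["result_id"])                # hypothetical key column
       .withColumn("ingest_date", F.current_date())  # partition column
)

(cleaned.write
    .mode("append")
    .partitionBy("ingest_date")
    .parquet("/data/curated/lab_results/"))
```

Partitioning by ingest date keeps the lake layout query-friendly; Parquet (columnar, compressed) is usually preferred for analytics reads, while Avro is more common for row-oriented streaming interchange.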

Posted 1 month ago


3.0 - 6.0 years

7 - 12 Lacs

Pune

Work from Office

We are seeking an experienced Databricks Developer with expertise in Delta Live Tables to join our team. The ideal candidate will possess a strong background in designing, developing, and maintaining data pipelines using Databricks and Delta Lake technologies.

Key Responsibilities:

- Develop, implement, and optimize data pipelines and workflows using the Databricks platform
- Design and manage Delta Live Tables for real-time, streaming, and batch data processing solutions
- Monitor and troubleshoot data pipelines to ensure high performance, reliability, and data quality

Qualifications:

- 3 to 6 years of hands-on experience in Databricks platform development
- Proven expertise in Delta Lake and Delta Live Tables
- Strong SQL and Python/Scala programming skills
- Experience with cloud platforms such as Azure, AWS, or GCP (preferably Azure)
- Familiarity with data modeling and data warehousing concepts
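For illustration only, a minimal Delta Live Tables sketch of a bronze-to-silver flow of the kind this role involves. The table names, source path, and expectation rule are hypothetical; inside a Databricks DLT pipeline, `spark` is provided by the runtime:

```python
# Minimal Delta Live Tables sketch (runs inside a Databricks DLT pipeline).
# Table names, source path, and the expectation rule are hypothetical.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Raw events ingested from cloud storage.")
def events_bronze():
    return (
        spark.readStream.format("cloudFiles")   # Auto Loader; `spark` comes from the DLT runtime
             .option("cloudFiles.format", "json")
             .load("/mnt/raw/events/")          # hypothetical source path
    )


@dlt.table(comment="Cleaned events with a basic quality gate.")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def events_silver():
    return (
        dlt.read_stream("events_bronze")
           .withColumn("processed_at", F.current_timestamp())
    )
```

The expectation decorator drops rows that fail the quality rule and records the counts in the pipeline's event log, which is how DLT surfaces the reliability and data-quality monitoring the listing mentions.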

Posted 3 months ago
