🌟 Exciting Career Opportunity at Tailwyndz! 🌟

We're growing our Data Science (CoE) Fleet at Tailwyndz, and we're on the lookout for passionate Data Scientists to join the journey! 🚀

If you're someone who:
--> Has 2+ years of hands-on experience in Data Science/ML
--> Loves turning data into powerful insights
--> Thrives in a fast-paced, collaborative environment
--> Is eager to push boundaries with cutting-edge tech
…then we want to hear from you!

📍 Location: Chennai (Hybrid Mode)
⚡ Immediate Joiners Preferred

Join us in building smart, scalable solutions that drive business impact and innovation at the heart of Tailwyndz. If you're looking to elevate your career and work with a bold, future-focused team, this is your moment.

📩 Apply now: Send your resume to recruitment@tailwyndz.com

Let's shape what's next, together. 💡

#DataScience #Hiring #CareerOpportunities #Tailwyndz #Innovation #ChennaiJobs #DataScientists #MLCareers #TechTalent
Job Description

A Data Engineer Extraordinaire will possess masterful proficiency in crafting scalable and efficient solutions for data processing and analysis. With expertise in database management, ETL processes, and data modelling, they design robust pipelines using cutting-edge technologies such as Apache Spark and Hadoop. Their proficiency extends to cloud platforms like AWS, Azure, or Google Cloud Platform, where they leverage scalable resources to build resilient data ecosystems. This exceptional individual possesses a deep understanding of business requirements, collaborating closely with stakeholders to ensure that data infrastructure aligns with organizational objectives. Through their technical acumen and innovative spirit, they pave the way for data-driven insights and empower organizations to thrive in the digital age.

Key Responsibilities
- Develop and maintain cutting-edge data pipeline architecture, ensuring optimal performance and scalability
- Build seamless ETL pipelines for diverse sources, leveraging advanced big data technologies
- Craft advanced analytics tools that leverage the robust data pipeline, delivering actionable insights to drive business decisions
- Prototype and iterate test solutions for identified functional and technical challenges, driving innovation and problem-solving
- Champion ETL best practices and standards, ensuring adherence to industry-leading methodologies
- Collaborate closely with stakeholders across Executive, Product, Data, and Design teams, addressing data-related technical challenges and supporting their infrastructure needs
- Thrive in a dynamic, cross-functional environment, working collaboratively to drive innovation and deliver impactful solutions

Required Skills and Qualifications
- Programming: Proficient in SQL, Python, Spark, and data transformation techniques
- Cloud Platforms: Experience with AWS, Azure, or Google Cloud Platform (GCP) for deploying and managing data services
- Data Orchestration: Proficient in using orchestration tools such as Apache Airflow, Azure Data Factory (ADF), or similar tools for managing complex workflows
- Data Platform Experience: Hands-on experience with Databricks or similar platforms for data engineering workloads
- Data Lakes and Warehouses: Experience working with data lakes, data warehouses (Redshift/SQL Server/BigQuery), and big data processing architectures
- Version Control & CI/CD: Proficient in Git, GitHub, or similar version control systems, and comfortable working with CI/CD pipelines
- Data Security: Knowledge of data governance, encryption, and compliance practices within cloud environments
- Problem-solving: Analytical thinking and a problem-solving mindset, with a passion for optimizing data workflows

Preferred Skills and Qualifications
- Bachelor's degree or equivalent in Computer Science, Engineering, or a related field
- 3+ years of experience in data engineering or related roles
- Hands-on experience with distributed computing and parallel data processing

Good to Have
- Streaming Tools: Experience with Kafka, Event Hubs, Amazon SQS, or equivalent streaming technologies
- Containerization: Familiarity with Docker and Kubernetes for deploying scalable data solutions
- Engage in peer review processes and present research findings at esteemed ML/AI conferences such as NeurIPS, ICML, AAAI, and COLT
- Experiment with the latest advancements in data engineering tools, platforms, and methodologies
- Mentor peers and junior team members and handle multiple projects at the same time
- Participate and speak at external forums such as research conferences and technical summits
- Promote and support company policies, procedures, mission, values, and standards of ethics and integrity
- Certifications in AWS, Azure, or GCP are a plus
- Understanding of modern data architecture patterns, including the Lambda and Kappa architectures