
Commit b54c324

Merge pull request #3 from Snigf12/Incomplete-Projec
Incomplete projec
2 parents 8f8b1b0 + 3ed33aa commit b54c324

File tree

9 files changed (+751 -5 lines changed)

Project/Output.PNG

161 KB

Project/README.md

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
The file "buscar_pelotasVN_LaplaceLab.py" contains the function that implements the artificial vision system.

This function returns [c1, c2, numx, numy], which represent the detected color (c1 -> Orange, c2 -> Green) and the coordinates measured from the top side of the Kinect sensor (numx and numy).

As shown in the image "Output.png", the x coordinate is represented by numx and the y coordinate by numy.
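
A minimal usage sketch, assuming the import path that SistemaFinal.py uses in this commit (buscar_pelotasVN_Lab); the interpretation of the return values follows the description above:

from buscar_pelotasVN_Lab import buscar_pelotasVN

# c1 -> orange detected, c2 -> green detected, numx/numy -> coordinates from the sensor
c1, c2, numx, numy = buscar_pelotasVN()
if c1:
    print('Orange ball at x =', numx, 'y =', numy)
elif c2:
    print('Green ball at x =', numx, 'y =', numy)
else:
    print('No ball detected')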

Project/SistemaFinal.py

Lines changed: 44 additions & 0 deletions
@@ -0,0 +1,44 @@
# -*- coding: utf-8 -*-
import RPi.GPIO
from numpy import array
from buscar_pelotasVN_Lab import *
import time

# GPIO pin mapping by board number
RPi.GPIO.setmode(RPi.GPIO.BOARD)

# Output pin configuration
# Serial output
RPi.GPIO.setup(36, RPi.GPIO.OUT)

# Raspberry ready
RPi.GPIO.setup(38, RPi.GPIO.OUT)

# Input pin configuration
# Can receive data: VEX ARM Cortex
RPi.GPIO.setup(40, RPi.GPIO.IN)

#while (c1 is 0) and (c2 is 0):
#while True:

try:
    while True:
        # Call the vision system:
        # c1 -> bool, if True then target is orange
        # c2 -> bool, if True then target is green
        # numx -> float x distance from sensor in cm (horizontal distance)
        # numy -> float y distance from sensor in cm (depth distance)
        c1, c2, numx, numy = buscar_pelotasVN()
        # Convert to digital values
        if numy > 0:
            print('numx', numx, 'numy', numy)
            # Scale the value to the 0-255 range, where 2 m
            # is the maximum value of ym in meters
            numy = int(255*numy/2)

        print('Orange', c1, 'Green', c2, 'numx [cm]', numx, 'numy [cm]', numy)

except KeyboardInterrupt:
    RPi.GPIO.cleanup()
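
As a worked example of the scaling above: with the 2 m cap stated in the comment, a reading of numy = 1.2 is mapped to int(255*1.2/2) = 153 before transmission (the GPIO write itself is not yet implemented in this version).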

Project/buscar_pelotasVN_Lab.py

Lines changed: 195 additions & 0 deletions
@@ -0,0 +1,195 @@
# -*- coding: utf-8 -*-
# Import the required libraries
from freenect import *
from numpy import *
from cv2 import *
from time import *

def buscar_pelotasVN(): # Main function, called from the main program
                        # for serial transmission to the CORTEX

    # RGB acquisition function for the Kinect
    def frame_RGB():
        array, _ = sync_get_video()
        array = cvtColor(array, COLOR_RGB2BGR)
        return array

    # Depth acquisition function for the Kinect
    def frame_depth():
        array, _ = sync_get_depth()
        return array

    # Function that returns a binary image where green is white
    # and everything else is black
    def filtLAB_Verde(img):
        lab = cvtColor(img, COLOR_BGR2Lab)
        # Green range values used to build the mask
        #verde_bajo = array([80, 132, 0]) -> im1
        #verde_alto = array([244, 153, 110]) -> im1
        #verde_bajo = array([52, 141, 21]) -> im2
        #verde_alto = array([196, 156, 94]) -> im2
        verde_bajo = array([20, 76, 132])
        verde_alto = array([240, 121, 215])

        mascara = inRange(lab, verde_bajo, verde_alto)

        er = ones((7,7), uint8) # erosion kernel

        dil = array([[0,0,0,1,0,0,0],
                     [0,1,1,1,1,1,0],
                     [0,1,1,1,1,1,0],
                     [1,1,1,1,1,1,1],
                     [0,1,1,1,1,1,0],
                     [0,1,1,1,1,1,0],
                     [0,0,0,1,0,0,0]], uint8) # dilation kernel

        mascara = erode(mascara, er, iterations=1)   # apply erosion
        mascara = dilate(mascara, dil, iterations=1) # apply dilation
        return mascara

    def filtLAB_Naranja(img):
        lab = cvtColor(img, COLOR_BGR2Lab)
        # Orange range values used to build the mask
        #naranja_bajo = array([51, 158, 69]) -> im1
        #naranja_alto = array([193, 202, 112]) -> im1
        #naranja_bajo = array([44, 166, 71]) -> im2
        #naranja_alto = array([170, 205, 106]) -> im2
        naranja_bajo = array([20, 136, 152])
        naranja_alto = array([235, 192, 198])

        mascara = inRange(lab, naranja_bajo, naranja_alto)

        er = ones((7,7), uint8) # erosion kernel

        dil = array([[0,0,0,1,0,0,0],
                     [0,1,1,1,1,1,0],
                     [0,1,1,1,1,1,0],
                     [1,1,1,1,1,1,1],
                     [0,1,1,1,1,1,0],
                     [0,1,1,1,1,1,0],
                     [0,0,0,1,0,0,0]], uint8) # dilation kernel

        # erosion and dilation kernels
        mascara = erode(mascara, er, iterations=1)   # apply erosion
        mascara = dilate(mascara, dil, iterations=2) # apply dilation
        return mascara


    # Variables to return
    #resultado=[c1, c2, x1, x2, x3, x4, x5, x6, x7, x8, y1, y2, y3, y4, y5, y6, y7, y8]
    # Color_|________X coordinate_(xm)_______|_____Y coordinate_(ym)________|
    # Array with the information that is sent serially


    # Main part
    init = time() # start timing

    frame = frame_RGB()   # read RGB frame
    depth = frame_depth() # read depth frame
    depth = resize(depth, (0,0), fx=0.5, fy=0.5)

    mascaraV = resize(frame, (0,0), fx=0.5, fy=0.5)
    mascaraN = mascaraV
    frame = mascaraV
    frame = medianBlur(frame, 3)

    color = time()
    mascaraV = filtLAB_Verde(frame)
    mascaraN = filtLAB_Naranja(frame)

    tc = time() - color # color-filtering time

    # Find the circles present in the filtered masks
    circuloV = HoughCircles(mascaraV, HOUGH_GRADIENT, 1, 40, param1=60,
                            param2=24, minRadius=0, maxRadius=0)

    circuloN = HoughCircles(mascaraN, HOUGH_GRADIENT, 1, 40, param1=60,
                            param2=24, minRadius=0, maxRadius=0)

    # The depV and depN distances use the information from this page:
    # https://openkinect.org/wiki/Imaging_Information (August 18)
    # That regression was modified to reduce the error, finding an
    # approximation of the form 1/(Bx+C), where x is the byte value
    # read from the sensor

    # For the alignment
    cteX = 9
    cteY = 9 # RGB/depth alignment values
    #circle(rgb, (80-cteX,50+cteY),40,(0,0,255),5)

    centimg = round(frame.shape[1]/2)  # image center, where the horizontal angle is 0°
    centVert = round(frame.shape[0]/2) # vertical center

    # If at least one circle was found
    if circuloV is not None:
        circuloV = circuloV.astype("int")
        xV = circuloV[0,0,0]
        xVd = xV + cteX
        yV = circuloV[0,0,1]
        yVd = yV + cteY
        verde = True
        if xVd >= frame.shape[1]:
            xVd = 319
        if yVd >= frame.shape[0]:
            yVd = 239
        # The depth value is read at coordinate (y,x) -> (480x640)
        depV = 1/(depth[yVd,xVd]*(-0.0028642) + 3.15221)
        depV = round(depV, 4) # four decimal places
        if depV < 0:
            depV = 0
        #depV = ((4-0.8)/2048)*(depth[xVd,yVd]+1)+0.8 own approximation
    else:
        verde = False

    if circuloN is not None:
        circuloN = circuloN.astype("int")
        xN = circuloN[0,0,0]
        xNd = xN + cteX
        yN = circuloN[0,0,1]
        yNd = yN + cteY
        naranja = True
        if xNd >= frame.shape[1]:
            xNd = 319
        if yNd >= frame.shape[0]:
            yNd = 239

        # The depth value is read at coordinate (y,x) -> (480x640)
        depN = 1/(depth[yNd,xNd]*(-0.0028642) + 3.15221)
        depN = round(depN, 4) # four decimal places
        if depN < 0:
            depN = 0
    else:
        naranja = False

    if naranja or (verde and naranja):
        c1, c2 = 1, 0
        bethaN = abs(centVert - yNd)*0.17916 # 0.17916 is °/px vertically (43°/240)
        bethaN = (bethaN*pi)/180
        depN = depN*cos(bethaN) # project out the vertical component so the distance refers to 0° vertical
        alphaN = (xNd - centimg)*0.1781 # 0.1781 is degrees per pixel (°/px) at 320 x 240
        alphaN = (alphaN*pi)/180 # in radians
        xm = depN*sin(alphaN)
        ym = depN*cos(alphaN)
    elif verde and (not naranja):
        c1, c2 = 0, 1
        bethaV = abs(centVert - yVd)*0.17916 # 0.17916 is °/px vertically (43°/240)
        bethaV = (bethaV*pi)/180
        depV = depV*cos(bethaV) # project out the vertical component so the distance refers to 0° vertical
        alphaV = (xVd - centimg)*0.1781 # 0.1781 is degrees per pixel (°/px)
        alphaV = (alphaV*pi)/180 # in radians
        xm = depV*sin(alphaV)
        ym = depV*cos(alphaV)
    else:
        c1, c2 = 0, 0
        xm, ym = 0, 0
    t = time() - init
    ##imshow('VERDE',mascaraV)
    ##waitKey(1)
    ##imshow('NARANJA',mascaraN)
    ##waitKey(1)
    print('FIN', t, 'COLOR', tc) # te (edge-detection time) is not measured in this version
    print(c1, c2, xm, ym)
    return c1, c2, xm, ym
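
As a standalone illustration of the conversion used above, here is a small sketch that applies the same 1/(Bx+C) depth regression and the same degrees-per-pixel constants to turn a detection into (xm, ym); the helper names, the raw value 700, and the pixel coordinates are illustrative, not part of the committed code:

from math import sin, cos, pi

def raw_depth_to_meters(raw):
    # Regression of the form 1/(B*x + C) with the constants used in buscar_pelotasVN_Lab.py
    return 1/(raw*(-0.0028642) + 3.15221)

def to_plane_coords(dep, x_px, y_px, centimg=160, centVert=120):
    # 0.17916 °/px vertical (43°/240) and 0.1781 °/px horizontal, as in the file above
    betha = abs(centVert - y_px)*0.17916*pi/180
    alpha = (x_px - centimg)*0.1781*pi/180
    dep = dep*cos(betha)                   # refer the distance to 0° vertical
    return dep*sin(alpha), dep*cos(alpha)  # (xm, ym)

dep = raw_depth_to_meters(700)             # illustrative raw depth value
xm, ym = to_plane_coords(dep, 200, 130)
print(round(xm, 4), round(ym, 4))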

README.md

Lines changed: 13 additions & 5 deletions
@@ -2,13 +2,21 @@
  Spheres Recognition Color-Depth

  Hi there! This is my first project,
+ First version

- The first version will be uploaded in November of 2016
- Will be a program developed with python, Kinect and Raspberry Pi 1 B+
- to recognize spheres and their position on coordinates x, y, in cm respect the position of the Kinect Sensor.
+ This is an artificial vision system for robotics applications, developed with Python and OpenCV, acquiring the images with a Kinect sensor and processing them with a Raspberry Pi 3 Model B.

- Recognize only two colors (orange and green).
- Will be used xBox360 Kinect Sensor - 1414
+ The system recognizes spheres and their position on coordinates x, y, in cm with respect to the position of the Kinect Sensor.

+ Recognizes only two colors (orange and green).
+ The Kinect sensor used is the xBox360 Kinect Sensor - 1414
+
+ 1. Install Raspbian on your Raspberry Pi - https://www.raspberrypi.org/downloads/
+ 2. Install the OpenCV library for Python - http://www.pyimagesearch.com/2016/04/18/install-guide-raspberry-pi-3-raspbian-jessie-opencv-3/
+ 3. Install the NumPy library for Python - pip install numpy
+ 4. Install libfreenect to be able to use the Kinect sensor - Nice tutorial -> https://naman5.wordpress.com/2014/06/24/experimenting-with-kinect-using-opencv-python-and-open-kinect-libfreenect/ AND for more information about the OpenKinect community -> https://openkinect.org/wiki/Main_Page
+
+ For finding spheres, this system uses the HoughCircles method. If the green and orange colors are not well filtered, you can change the desired color ranges; the Lab colorspace is used (a short filtering sketch follows this diff).
+
  Thanks,
  Snigf12
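
A minimal sketch of the Lab-colorspace filtering mentioned above, assuming an input image read from disk; the bounds are the green range from Project/buscar_pelotasVN_Lab.py and are the values you would tune if the filtering is poor:

import cv2
from numpy import array

img = cv2.imread('frame.png')               # illustrative input frame
lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)  # convert from BGR to Lab

# Lower/upper Lab bounds for green, taken from filtLAB_Verde(); adjust to your lighting
verde_bajo = array([20, 76, 132])
verde_alto = array([240, 121, 215])

mask = cv2.inRange(lab, verde_bajo, verde_alto)  # white where the pixel is inside the range
cv2.imwrite('mask.png', mask)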
